Thursday, November 24, 2011

Tuning the .NET Compact Framework 3.5 Application - Part I

In the era of smarter smartphones and sophisticated mobile hardware specs, with dual-core processors, 800 MB of RAM and operating systems like iOS, Android or maybe WP7 depending on the maker and device limitations, the majority of mobile devices in the enterprise world (distribution, retail and logistics) are still Pocket PC (PPC) or Windows Mobile (WM) based, and some even run Windows CE, mostly because of special auxiliaries such as an inbuilt laser scanner, camera and phone. The result is heavy-duty smart devices like the Motorola MC70 or MC17. Such devices generally come with minimal resources and computing capacity; a typical specification reads CPU: Intel PXA270, 32-bit, 520 MHz (or an ARM), Memory: 64 MB RAM and 64 MB Flash ROM, OS: Microsoft Windows CE 5.0 Professional. Application performance therefore plays a pivotal role operationally, and for developers it is a real challenge to code efficiently on the .NET Compact Framework, a compact version of the full .NET Framework with limited support in areas like garbage collection and the UI. Some respite for developers is that with CF 3.5 we get to use WCF and some C# 3 language support such as LINQ, auto-implemented properties and so on.

Even with the best practices and architectural design put in place before or during development, there is certainly a lot to learn by the time we have the first piece of code running on the device, which implies that design is an evolving process in mobile development, at least in this case. We have to continually monitor performance against certain classifications so that we stay focused in our approach; the performance of the device could be classified as follows:



AM [application memory] - Often relates to OOM [Out Of Memory] exceptions, mostly due to a weak GC and no support for bitmap compression in the Compact Framework; hence bitmaps and controls have to be used cautiously and disposed of once the job is done.
CU [CPU usage] - Relates to battery life: as per the laws of performance, CPU usage is directly proportional to battery drain, and hence this counter becomes even more critical after memory.
BL [battery life] - By the same law, battery life is directly tied to read/write activity and CPU usage.

PT [process time] - Time taken in screen transitions in a complex forms application; this metric is very specific to the way we instantiate and load objects throughout the application. In addition to process time, read/write on the log files must be optimized so that we don't burn unnecessary read/write cycles, which are expensive operations: instead of writing to the log file instantly, hold the entries in a buffer up to a certain limit and only then write them to the log file, as sketched below.
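To make the last two points concrete, here is a minimal sketch (the class and member names are mine and purely illustrative, assuming .NET CF 3.5 and a writable log path on the device) of disposing a bitmap as soon as it has been drawn and of buffering log entries so the file is only touched when the buffer fills up.

using System;
using System.Drawing;
using System.IO;
using System.Text;

public static class BitmapHelper
{
    // Dispose bitmaps as soon as they are drawn so the large unmanaged
    // buffers are released immediately instead of waiting for the GC.
    public static void DrawLogo(Graphics g, string path)
    {
        using (Bitmap logo = new Bitmap(path))
        {
            g.DrawImage(logo, 0, 0);
        } // Dispose() runs here and frees the native bitmap memory.
    }
}

// Buffer log entries in memory and touch the file only when the buffer
// passes a threshold, saving read/write cycles (and battery).
public class BufferedLogger
{
    private readonly StringBuilder _buffer = new StringBuilder();
    private readonly string _logFile;
    private const int FlushThreshold = 4096; // characters; tune per device

    public BufferedLogger(string logFile)
    {
        _logFile = logFile;
    }

    public void Write(string message)
    {
        _buffer.Append(DateTime.Now.ToString("s") + " " + message + "\r\n");
        if (_buffer.Length >= FlushThreshold)
        {
            Flush();
        }
    }

    public void Flush()
    {
        if (_buffer.Length == 0) return;
        using (StreamWriter writer = new StreamWriter(_logFile, true)) // append mode
        {
            writer.Write(_buffer.ToString());
        }
        _buffer.Length = 0; // clear the buffer (StringBuilder.Clear is not in CF)
    }
}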
The best way to monitor performance is to have resource-specific counters on the device; the Compact Framework ships with tools like RPM, so let's have a look at how we get these counters ticking.

Monitoring Counters
How would you find out whether there is a memory leak or a growing pinned object count, or monitor the GC counters, while the application is running on the device?
The best way to monitor these counters is to use the Remote Performance Monitor (RPM) and hook it up with the Windows Performance Monitor (perfmon). Follow the steps below to configure RPM and its counters and hook it up to your device.

Step 0 : Connect the device to your dev box through ActiveSync or TCP.
Step 1 : Make sure the executable is available in the following location: "C:\Program Files\Microsoft.NET\SDK\CompactFramework\v3.5\bin\NetCFRPM.exe".
I personally prefer the earlier version of RPM, which does not crash like the later one ("C:\Program Files\Microsoft.NET\SDK\CompactFramework\v2.0\bin\NetCFRPM.exe"), and hence will continue the show using v2.0; the options are pretty much the same, and I feel 2.0 is more stable than the 3.5 RPM.
Note: C:\ => change the drive letter to the one you have installed the SDK on.
Step 2 : Copy the files netcflaunch.exe and netcfrtl.dll from "C:\Program Files\Microsoft.NET\SDK\CompactFramework\v2.0\WindowsCE\wce500\armv4i" to the Windows directory of the device (in my case it is an ARM device; take the files from the mipsiv folder if it is MIPS-based, as all processor types are available).
Step 3 : Run the exe and add a live counter by clicking on Live Counters as shown below.


Step 4 : Connect to the device, specify the location and name of the exe and connect. Make sure the application is not currently running on the device, as RPM will automatically invoke the specified exe (the launch-on-connect option is enabled by default).


Step 5 : Make sure the options shown below are selected so that RPM publishes the metrics to the performance monitor.


Step 6 : To save the statistics to a log file, make sure the options are selected from the device menu, so that
the artifacts can be used to compare the counters after any performance tuning.


Step 7 : Now open the Performance Monitor (Win key + R, type perfmon, press Enter) and add the necessary counters as shown below.



Once you have all the above settings done, you are armed with all the required ammunition to have a crack at the performance of your application on the device. As a result of this exercise you will have the performance graphs as well as the RPM counters, which look like this:




The reports should give you a clear idea about memory, Windows Forms and GC details in depth, which helps in identifying any bottleneck in the application; running it many times with different sets of data and analyzing the results gives a clearer picture of the performance. Hope this post helps you set up RPM and get performance monitoring rolling. More on the counters and performance-tuning best practices will follow in future posts on the same topic. Until then, happy profiling.

Tuesday, November 15, 2011

What is an ideal agile development environment ?

I have been working on agile projects for almost 4 years now, and more often than not I have realized that the environments for Development, Build and Test (DBT for short) play a huge role in achieving the sprint goals [stretch goals, more often, if your estimates go haywire].

By now you must be thinking, what is this guy up to, when the agile manifesto says "Individuals and interactions over processes and tools"? To me (or maybe to the manifesto itself) that relates to the tools for tracking and logging, not to the DBT environments; every developer and tester still prefers having a definitive process for moving a story from one state to another without straying from the manifesto.
Now what is a smooth transition of a story? Let's assume we have picked 5 pieces of functionality which sum up to 50 stories in the current sprint, and I should be able to achieve this in 5 days, considering my working hours are 5 and my capacity is 2 developers/testers (an ideal state, assuming so !!). Now let's get to the transitions.
Environments play a vital role at every state. How do we make sure that at every state we keep doing the right thing, "release early and release often" to test, get continual feedback and end the sprint with a tested, working piece?

Continuous Integration (CI) - Continuous integration involves integrating your code early and often (ideally after completion of a story, maybe a defect fix or a code refactoring, and not otherwise), so as to avoid the pitfalls of "integration hell". This practice aims to reduce rework, and thus cost and time, and most importantly gives immediate feedback on your check-in. To get the best out of CI, one has to follow its best practices, the first and most important being: "Before any code commit, get the latest code from source control, build the solution locally and make sure there aren't any build errors", as the source control could have been updated by anyone else's commit.
Setting up CI and auto-deploy could be another topic worth covering, maybe in a future post.
Now, digging deeper and asking ourselves why CI should be in place in a typical agile environment: first, it helps developers detect and fix integration problems continuously, avoiding last-minute chaos at release dates. More advantages include:
  • early warning of broken/incompatible code
  • early warning of conflicting changes
  • immediate unit testing of all changes
  • constant availability of a "current" build for testing, demo, or release purposes
  • immediate feedback to developers on the quality, functionality, or system-wide impact of code they are writing
  • frequent code check-in pushes developers to create modular, less complex code
  • metrics generated from automated testing and CI (such as code coverage, code complexity, and features complete) focus developers on writing functional, quality code; there is also a host of plugins available for CI.
With so many advantages there must be some disadvantages as well, as consolation for those who don't want to use CI; in my view, though, we should call them investments rather than disadvantages. The investments look like:
  • initial build box setup time required and getting developers up to speed with training on CI tools.
  • well-developed test-suite required to achieve automated testing advantages
  • Dedicated build machines.
Tools Available
The most widely used would be Hudson and CruiseControl.NET; teams using TFS 2010 are getting used to its built-in CI tooling. You can find a comparison chart of the different tools here. My personal pick is CC.Net, probably because we have been using it longer than any other.
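For a flavour of what a CC.Net project definition looks like, here is a minimal ccnet.config sketch; the project name, Subversion URL and paths are placeholders of mine, not from any real project, so treat it as a shape rather than a drop-in file. It polls Subversion every five minutes and builds the solution with MSBuild.

<cruisecontrol>
  <project name="MyProduct-Trunk">
    <!-- Watch the trunk; any new commit triggers a build -->
    <sourcecontrol type="svn">
      <trunkUrl>http://svnserver/myproduct/trunk</trunkUrl>
      <workingDirectory>C:\Builds\MyProduct</workingDirectory>
    </sourcecontrol>
    <triggers>
      <intervalTrigger seconds="300" />
    </triggers>
    <tasks>
      <!-- Compile the solution in Release mode -->
      <msbuild>
        <executable>C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
        <projectFile>C:\Builds\MyProduct\MyProduct.sln</projectFile>
        <targets>Build</targets>
        <buildArgs>/p:Configuration=Release</buildArgs>
      </msbuild>
    </tasks>
  </project>
</cruisecontrol>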

Well, to sum it up, the right kind of CI will certainly reduce the deployment and release overhead on the dev and test teams, and is an easy way to maintain clean code and a running solution at any time !!!

Release early release often !! go CI way..  

Wednesday, November 9, 2011

Leveraging Resharper & plugins.

I have been using ReSharper for more than 3 years now, and I must admit that without it I am a tad uncomfortable in the Visual Studio IDE; I am that addicted to ReSharper. To me it is more than just a productivity and refactoring plugin for Visual Studio: I have learnt clean coding through ReSharper, and its on-the-fly suggestions to avoid code smells have worked wonders for me over the years. We call it ALT + ENTER programming.

It's not only Alt+Enter: the shortcuts [the shortcut sheet is a permanent part of my desk soft-board] and the navigational support make you feel like the king of the town. You just do a Ctrl + N and you get the search window, which supports cool camel-case search: say your class is named MyClass, you just type MC in the search window and there you go, every class matching those initials appears in the drop list for you to choose from. There are so many navigational options that, once mastered, you can just speed along while coding, navigating through a solution or even doing a code review. What's more, write a complex for loop or a foreach loop and ReSharper will offer to convert it, completely or in part, into LINQ, so you get to learn LINQ on the job :-)
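As an example of that loop-to-LINQ hint, take a hand-written filtering loop (the Order type and the amounts below are made up purely for illustration); ReSharper flags it, and one Alt+Enter turns it into the equivalent LINQ call.

using System.Collections.Generic;
using System.Linq;

public class Order { public int Id; public decimal Amount; }

public static class OrderFilters
{
    // Before: the imperative loop that ReSharper flags with a "convert to LINQ" hint.
    public static List<Order> BigOrdersLoop(IEnumerable<Order> orders)
    {
        var result = new List<Order>();
        foreach (Order order in orders)
        {
            if (order.Amount > 1000)
                result.Add(order);
        }
        return result;
    }

    // After: the LINQ expression ReSharper converts it to (one Alt+Enter).
    public static List<Order> BigOrdersLinq(IEnumerable<Order> orders)
    {
        return orders.Where(order => order.Amount > 1000).ToList();
    }
}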

Code Inspector
The best is yet to come: I should tell you about the code inspector, an amazing feature in ReSharper. You just right-click on a file, a project or even the solution and select Find Code Issues; it will run through the code and point out all the code smells and recommendations that were either ignored in the first place or introduced by someone not using ReSharper and hence never shown these recommendations. Now that you have run the code inspector, you have the honour of cleaning up the smells.

The inspector analyzes the solution (or whichever level you selected to inspect) and gives out the results in a punctiliously categorized manner, as shown below.

This gives you the liberty to prioritize which category you want to correct first; I would certainly go for the Potential Code Quality Issues and make sure I have minimal or no code quality issues, and the rest can be cleared as we go along. Expanding an entry shows you the exact line number in the class, and double-clicking takes you to the problem area [a known issue on version 5.2, from what I have observed, is that double-clicking doesn't work as it should; it may be fixed in version 6, but I don't have a licence for 6 :-( ]. Once you are in the flagged area, all you have to do is ALT + Enter. Apart from the minor double-click issue, the inspection process works like a charm.

Refactor
This functionality is your best pal when you are going through Martin Fowler's books on refactoring;
it makes the process so easy that you would feel like taking up a contract just to clean up all the code. As you can see in the image, it offers only the options applicable to the selected item, avoiding any unnecessary refactoring.
Extract method and extract Interface have been my best friends :-)
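To show the shape of an Extract Method, here is a quick before/after on a hypothetical invoice calculation (none of this code is from the post, it is only an illustration): select the discount logic, choose Refactor > Extract Method, and ReSharper lifts it into its own named method.

public class InvoiceCalculator
{
    // Before: the discount logic is buried inside a longer method.
    public decimal TotalBefore(decimal subTotal, bool isPreferredCustomer)
    {
        decimal discount = 0m;
        if (isPreferredCustomer && subTotal > 500m)
            discount = subTotal * 0.05m;
        return subTotal - discount;
    }

    // After: the selected block has been extracted into a well-named method.
    public decimal TotalAfter(decimal subTotal, bool isPreferredCustomer)
    {
        return subTotal - CalculateDiscount(subTotal, isPreferredCustomer);
    }

    private static decimal CalculateDiscount(decimal subTotal, bool isPreferredCustomer)
    {
        return isPreferredCustomer && subTotal > 500m ? subTotal * 0.05m : 0m;
    }
}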



The Var debate
One of the longstanding debates since C# 3 added support for implicit type declaration is the usage of var instead of an explicit declaration; here's my view on using var.

Use var here :
In this case we clearly know from the RHS that scope is a TransactionScope, so take the ReSharper recommendation of using the implicit type and change it. Another place where this applies is when you have to hold an anonymous type returned by a LINQ statement.
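Since the original screenshot is not reproduced here, a small hand-written equivalent (the names are mine, purely illustrative): the right-hand side already names the type, or the value is an anonymous type that has no explicit name you could write at all.

using System;
using System.Linq;
using System.Transactions; // requires a reference to System.Transactions.dll

public static class VarGoodExamples
{
    public static void Run()
    {
        // The right-hand side already names the type, so 'var' loses nothing.
        using (var scope = new TransactionScope())
        {
            scope.Complete();
        }

        // Anonymous types from a LINQ projection can only be held in a 'var';
        // there is no explicit type name you could declare instead.
        var squares = Enumerable.Range(1, 5).Select(i => new { Number = i, Square = i * i });
        foreach (var item in squares)
            Console.WriteLine(item.Number + " -> " + item.Square);
    }
}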

Don't use var here :
In this case, if you use var, no one apart from you will know what the type of the order ID is;
the same holds for any primitive data type.
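And the flip side, again a made-up snippet: with a primitive coming back from a method call, var hides the type from the reader, so spell it out.

public class OrdersRepository
{
    public int GetNextOrderId() { return 42; }
}

public static class VarBadExample
{
    public static void Run(OrdersRepository repository)
    {
        // With 'var', a reader cannot tell whether the order id is an int,
        // a long, a string or a Guid without opening GetNextOrderId().
        var orderIdImplicit = repository.GetNextOrderId();

        // Spelling out the primitive type keeps the line self-documenting.
        int orderIdExplicit = repository.GetNextOrderId();

        System.Console.WriteLine(orderIdImplicit + ", " + orderIdExplicit);
    }
}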



The above is purely from a readability and maintainability perspective and would not have any performance or memory impact.

ReSharper has a fairly long list of plugins available, among which the PowerToys are my favourite; let's take a look at what's in store. First you will meet the agents belonging to the code practices and improvements category:

  • Agent Johnson
  • Agent Ralph
  • Agent Smith

  • Then comes the explosion of ReSharper PowerToys; unzip the package to see the full list of toys. One to look out for is the tool for analyzing the cyclomatic complexity of method bodies [VS2010 code metrics covers this anyway].
To name a few more: StyleCop, the TDD helper and so on... more here

With support for C#, VB.NET, XAML, ASP.NET, ASP.NET MVC, JavaScript, CSS, XML and unit testing, almost everything is covered; definitely a must-have for all developers.

Start sharpening your code if you are not doing so yet !!

Monday, November 7, 2011

Whose share is it anyway ?

It seems like Microsoft is losing a lot of ground in the browser battle, with Chrome speeding away. Cartoonmela had a pretty cartoon which sums up the current state of affairs: as Internet Explorer drops below 50% of web usage, there is more on Microsoft's plate to think about than just the Internet Explorer 10 preview. Whilst Firefox and Safari join hands and lose no ground, and humble Opera stays as niche as ever, not worried about capturing the world, Chrome is eating away Internet Explorer's share. However, Internet Explorer still retains a majority of the desktop browser market share, at 52.63%, a 1.76% drop from September. Desktop browsing makes up 94 percent of web traffic, with the rest from phones and tablets, both waters in which Internet Explorer is only toddling. As a share of the whole browser market, Internet Explorer has only 49.58 percent of users.

Just poking around, I came across some interesting statistics about who's who in the browser market; this is what they say.








The most interesting part of the statistics was the user adoption of the browsers.




More of these statistics here. To summarize, it looks like it's Chrome all the way on the desktop and Safari on mobile; in future posts let's explore the ideas and technology behind Chrome and Safari, what makes them so special and Internet Explorer not so much!

Saturday, November 5, 2011

Whats cooking with HTML 5 ?

We have been hearing a lot about [open] HTML5 and [Adobe] Flash, and how HTML5 could end Flash's dominance of the web world on both PC and mobile. Of course nothing would change in the Mac or Linux world, where there is very little support for Flash even today, citing performance and battery (in the case of mobile devices) overhead due to Flash's excessive CPU utilization. Adobe's headlines read that they are working on the performance glitch and claim to have achieved some improvement in Flash 10.1 on Windows through hardware acceleration. In the Mac world, Adobe says Apple isn't allowing Flash to become more efficient on the Mac OS X/Safari platform (or the iPod/iPhone/iPad one, either) by not providing the access to the hardware it needs to reduce its CPU load; Adobe is waiting and watching to see whether it gets that API access.

There is a lot of development work in progress on HTML5, headed by the Web Hypertext Application Technology Working Group (WHATWG) and backed by heavy players like Apple and Google. Eager to know what's new in HTML5? Categorically, the areas to look out for are:
  • New tags and types - The "semantic web" elements <header> and <footer>, <section> and <article>, new input types like email, URL, number and date, plus built-in form validation. A minimal HTML5 page (using the new doctype and the video element from the Media point below) looks like this:

    <!DOCTYPE HTML>
    <html>
    <body>
      <video width="320" height="240" controls="controls">
        <source src="movie.mp4" type="video/mp4" />
        Your browser does not support the video tag.
      </video>
    </body>
    </html>
  • Canvas - Allows you to draw complex graphics and could potentially match up to Flash; liked by many in the Apple and Google world.
  • Media - The audio and video tags allow you to create AV controls and give you the freedom to control them with HTML tags and attributes.
  • Geolocation - Tells a web app where you are; it comes with a JS API, so you just call navigator.geolocation.getCurrentPosition() and you get the location.
  • Drag and drop - Support for dragging and dropping files from the desktop to the browser, and for drag and drop between controls on the page.
  • Offline and local storage - Allows you to cache the website locally on the client machine, so offline browsing of a site makes no difference unless it depends on real-time data updates.

Learn more on the tags and semantics here: W3Schools, Comparison chart on HTML 4 & 5, Demos

Flash vs HTML5: what do the experts say?

According to a report released recently, 34% of the world's top 100 web sites were using HTML5, with the adoption led by search engines and social networks. Facebook announced the launch of its HTML5 Resource Center, giving developers tools to build, test and deploy Facebook applications.

"Even as innovation continues, advancing HTML5 to Recommendation provides the entire Web ecosystem with a stable, tested, interoperable standard. The decision to schedule the HTML5 Last Call for May 2011 was an important step in setting industry expectations. Today we take the next step, announcing 2014 as the target for Recommendation." (Jeff Jaffe, Chief Executive Officer, W3C)

According to Adobe,
  • 85% of the most-visited web sites use Flash,
  • 75% of web video is viewed using the Flash Player,
  • 98% of enterprises rely on the Flash Player,
  • 70% of web games are made in Flash.
Jan Ozer, an expert in video encoding technologies, recently put HTML5 up against Flash in a series of tests that pitted the two technologies against each other on both Mac and PC and in different web browsers, including Internet Explorer 8, Google Chrome, Apple Safari and Mozilla Firefox. A summary of his tests is below.

Mac Tests
  • With Safari, HTML5 was the most efficient and consumed less CPU than Flash, using only 12.39% CPU. With Flash 10.0, CPU utilization was at 37.41%; with Flash 10.1, it dropped to 32.07%.
  • With Google Chrome, Flash and HTML5 were both equally inefficient (both around 50%).
  • With Firefox, Flash was only slightly less efficient than in Safari, but better than in Chrome.
Windows Tests
  • Safari wouldn't play HTML5 videos, so there was no way to test that. However, Flash 10.0 used 23.22% CPU while Flash 10.1 only used 7.43% CPU.
  • Google Chrome was more efficient on Windows than on the Mac. Playback with Flash Player 10.0 was about 24% more efficient than HTML5, while Flash Player 10.1 was 58% more efficient than HTML5.
  • On Firefox, Flash 10.1 dropped CPU utilization to 6% from 22% in Flash 10.0.
  • In Internet Explorer 8, Flash 10.0 used 22.41% CPU and Flash 10.1 used 14.62% CPU.
With the current builds of HTML5 and Flash, here is roughly where each fits.
Circumstances in which it would be appropriate to use HTML5
If you want a vendor-neutral format, to best respect viewers' freedom of software/hardware choice.
If you are running your video on low-end systems.
If you are looking to save costs (and not have to purchase Flash licenses).
If you want your video/application to be supported on the iPhone, iPad or other mobile Apple platforms.
If it is important that you work with an open development environment.
Circumstances in which it would be appropriate to use Flash
If your product needs to support a wide variety of browsers, including older ones like Internet Explorer 6.
If you offer video streaming in several bitrates, and want clients to dynamically select between these based on network bandwidth.
If you do not want people to copy your content.
If you want to be able to splice in commercials dynamically throughout the video.
If you need integration with webcams and microphones for interactivity, like two-way communication.

It's too early to say that HTML5 will eat into Flash's dominance, but there's certainly something cooking out there in the WHATWG camp with the strong backing of Apple and Google; to me, less Flash means no weekly Flash updates. Looks like GAME ON !!!

Thursday, November 3, 2011

Console.WriteLine("Hello Cloud");

It might be a late flight to post an introduction to cloud when the cloud space is getting denser and has started raining innovative ideas, tools and frameworks on and on..... For the benefit of late starters we will take that late flight and catch up, from the elephantine Hadoop for distributed processing of large data sets across clusters, to the healthy Eucalyptus software services for your secure private cloud, and, not to forget, some physics with elasticity (though certainly no Young's modulus) for EC2, the stretchable computing capacity in the cloud. Looks like there is a big family out there to provide stability, scalability and, most importantly, security.
Before taking a deep dive into the tools and frameworks on the cloud, let's unabashedly get into the basics of cloud and see what's seeding in there.
What is a Cloud ?
The Wiki says cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet). Should be a no-brainer.
What's on offer ?

Cloud Modes
Deployment modes of the cloud infrastructure depend on the domain and information security needs; you get to choose the right fit for your organization.
1. Public cloud - Public cloud or external cloud describes cloud computing in the traditional mainstream sense, whereby resources are dynamically provisioned on a self-service basis over the Internet, via web applications/web services, from an off-site third-party provider who bills on a utility computing basis.
2. Community cloud - A community cloud may be established where several organizations have similar requirements and seek to share infrastructure so as to realize some of the benefits of cloud computing. With the costs spread over fewer users than a public cloud (but more than a single tenant), this option is more expensive but may offer a higher level of privacy, security and/or policy compliance. Examples of community clouds include Google's "Gov Cloud".
3. Hybrid cloud - The term "hybrid cloud" has been used to mean either two separate clouds joined together (public, private, internal or external) or a combination of virtualized cloud server instances used together with real physical hardware. The most accurate definition is probably the use of physical hardware and virtualized cloud server instances together to provide a single common service. A combined cloud environment consisting of multiple internal and/or external providers "will be typical for most enterprises"; by integrating multiple cloud services, users may be able to ease the transition to public cloud services while avoiding issues such as PCI compliance.
4. Private cloud - Private cloud and internal cloud have been described as neologisms; vendors use the terms to describe offerings that emulate cloud computing on private networks. These products offer the ability to host applications or virtual machines in a company's own set of hosts, and provide the benefits of utility computing: shared hardware costs, the ability to recover from failure, and the ability to scale up or down depending upon demand.
What tools are available ?

The idea of this post was to get familiar with cloud terminology and to act as an enabler for more technical posts on the cloud; time for B.J. Thomas, "Raindrops keep falling on my head......"
Hope it's fulfilled !!


Wednesday, November 2, 2011

On the GO !!!

Since the world is glued to the mobile Internet, let's sneak a preview of how it looks and what's in store for the future; it looks very promising and no longer arcane !!