Time for developers to go home and chill

After the release of LINQ to SQL, everyone was claiming that there is no need to worry about data access any more, as half of a developer's work can be done through a small wizard rather than by spending days on it.

 

Now the situation has worsened further for developers with the release of ASP.NET MVC, and now with "SubSonic MVC Addin Updated for Beta 1", where you don't even need to worry about CRUD forms, branding, styling and master pages. All will be generated automatically for you. You just need to learn how to use the project wizard to generate all of this stuff for you :).

Goodbye, coders, and welcome, domain developers.

IIS – Web Application Proxy Settings for internet access

To add a proxy server, add the following within the configuration node of the web application's web.config:


<system.net>
  <defaultProxy>
    <proxy bypassonlocal="true" proxyaddress="http://ProxyAddress:PortNumber" />
  </defaultProxy>
</system.net>

Scripts to change SharePoint service accounts' credentials

I was trying to change the passwords for my SharePoint environment and stumbled across these very nice Microsoft articles.

This Microsoft KB talks about changing the service account/service account password for WSSv3 and MOSS2007:
http://support.microsoft.com/kb/934838

Joel Oleson’s article talks about some additional tips including updating Search Service credentials:
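For reference, on WSS v3 the farm-account password change described in the KB article boils down to a single stsadm operation, run from the 12-hive bin folder on each server in the farm (the account name and password below are placeholders, not values from the articles):

```
stsadm -o updatefarmcredentials -userlogin DOMAIN\SPFarmAccount -password NewP@ssw0rd
iisreset /noforce
```

Check the KB article for the full per-service list, since Search and the SSP accounts need their own steps.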

 

Installing CRM 4 over SQL 2008 and Windows 2008

Yesterday, I decided to install CRM 4 on SQL 2008 and Windows 2008. I had already installed CRM on Windows 2008 before, and that wasn't problematic, as I only faced issues with the Windows firewall, but this time it was really challenging.
 
I will try to be specific and will discuss, point by point, the issues that I faced and how I resolved them:
 
    • As part of my security policy, I disconnected all my internal servers from the Internet to ensure a restricted deployment scenario. After installing SQL 2008 on a separate box, when I ran CRM setup, it gave me the option to either download updated installation files or use the existing ones. I decided to use the existing ones, as I didn't have an Internet connection on my server. After passing through all the configuration steps, when I reached the last step, system diagnostics, I got many errors, out of which I would like to mention two: the application complained about the missing msftesql and cisvc services.

    • For cisvc.exe, you need to start the Windows Indexing Service on the CRM box.
    • As for msftesql, the full-text search service was renamed in SQL 2008, so you cannot find a service by this name anywhere on the SQL box. Hotfixes for this are available on the Microsoft site, but if you download them and try to apply them as a patch file the way the Microsoft site describes, it will not work and you will get the same error. So now what? I was stuck. Fortunately it is still solvable: if you use the "Update installation files" option of setup to refresh your setup files, it works like a charm, whereas supplying the patch files as part of your installation does not. So after trying a few times, I realized I had to open the Internet connection on the CRM box and let setup download the updated files.
    • After getting past the above issues, I started getting another error: "The SQL Server '{0}' is unavailable". Hmm, a new challenge. I tried to Google it and found a few resolutions, but none of them worked for me. Finally I realized it had something to do with the Windows firewall of Windows Server 2008, so I disabled it and that worked for me. For details of the ports which need to be manually opened on the SQL box in Windows Firewall, consult here.
    • Hmm, now I am done with the issues on the verification screen, so I can start the deployment. Here I would like to mention one more thing: I was using a domain account for the CRM service, not Network Service. So when I started my installation this time, after getting my environment verified, I got an error: "Action Microsoft.Crm.Setup.Server.ConfigureAspNetAccountAction failed". Again, fortunately, it is a known issue, and the workaround documented on the Microsoft support site is to use the Network Service account rather than a domain account :S.
    • By this time, I was quite frustrated with these stupid things and decided to have a break before continuing. After the break, I re-ran the setup, went through all the above pains, and when I reached the previous point, using the Network Service account, I was able to move forward, thank God. But wait a minute, I got another error. This time it was "The specified path is not a metabase path." Platform Error: System.Exception: Action Microsoft.Crm.Setup.Server.RSConfigAction failed. ---> System.ArgumentException: The specified path is not a metabase path." What the hell is this? Again a known issue, and again a workaround exists on the Microsoft Support Center.
    • After following this, I was finally able to deploy CRM 4 on SQL 2008 and Windows 2008. It was painful but possible, and we managed to deploy CRM within the same day.
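For reference, the two manual fixes above (starting the Indexing Service on the CRM box and opening the SQL port in the Windows 2008 firewall) can be done from an elevated command prompt. The rule name and port 1433 below are assumptions for a default SQL instance; check the linked port list for your setup:

```
:: On the CRM box: enable and start the Windows Indexing Service (cisvc)
sc config cisvc start= auto
net start cisvc

:: On the SQL box: allow inbound SQL Server traffic for a default instance (TCP 1433)
netsh advfirewall firewall add rule name="SQL Server (TCP 1433)" dir=in action=allow protocol=TCP localport=1433
```

Opening specific ports this way is preferable to disabling the firewall entirely, as I ended up doing.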

Hope this will be beneficial for others who are about to go through the same pain, and that I have managed to share some of it.

 

SharePoint Forum web-part

I'm sure every one of you is frustrated with the limited features of the SharePoint discussion board for Internet-facing web sites, as it does not support multiple forums, categorization, ratings and other features out of the box. Even the CodePlex forum web part has some limitations for Internet-facing websites. So now try this open-source project and you will be amazed: http://www.yetanotherforum.net/. The best thing is that you have its source code, so you can do whatever you like with it.

Search Crawling Vs Indexing

Well, it is a common question: what is the difference between search crawling and indexing? Although most of us must have studied this during graduation, we were used to studying just to pass exams, and these were boring topics.
 
So I will just give a brief of differences between these two:
 

Crawling

 
Crawling means pulling content from your search sources. It is a dumb process whereby the crawling engine pulls data and documents from all specified sources and caches the results to deliver to another process, normally the indexing engine. In itself it does not do anything else. It is like a stress engine: you are generating network traffic and thus consuming memory and resources of the machine. So, as a matter of fact, in WAN or high-performance scenarios it is always advisable to have a dedicated crawl server separate from your normal web and index servers.
 

Indexing

 
Indexing is the key to searching. It means making sense out of the retrieved content and storing the processing results in a (more or less complex) document index. Link analysis is one way to measure URI importance, popularity, trustworthiness and so on.
 
Mechanisms are available to direct a crawling or indexing engine either not to crawl or not to index content, and these are commonly used by web sites. Indexing is controlled through the robots meta element (e.g. content="noindex"), while crawling is usually restricted through the Disallow directive in a site's robots.txt file.
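As a quick sketch of the two mechanisms (the /private/ path below is just a placeholder):

```
<!-- In a page's <head>: ask indexing engines not to index this page or follow its links -->
<meta name="robots" content="noindex, nofollow" />
```

```
# robots.txt at the site root: ask crawlers to skip a whole section
User-agent: *
Disallow: /private/
```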
 
Hope this helps clear things up for everyone.