Hacking SLN Files to Improve Visual Studio/Visual Source Safe Integration

There are some who think Microsoft Visual Source Safe (VSS) is great. There are some who think that it’s not great but it is pretty good. And there are some who think Source Safe is just good enough. I am not among any of those groups.

One pain point with Source Safe in a Microsoft tool chain is integration with Visual Studio. Historically, Visual Studio has stored source control information directly in the solution and project files. This reveals itself to be a really terrible idea the first time a project or solution file is shared in Source Safe. Checkouts from within the IDE will use the source control bindings in the solution or project. If the solution or project file has been shared and not changed, the bindings will still point to the original location in Source Safe, and checking out through Visual Studio will check out the wrong file.

Yes, the solution and project files can be branched and the bindings updated. That’s sub-optimal. It means branching just to fix a tool problem and not because of any actual divergence in the code base. Any non-divergent changes to the solution or project must then be manually propagated to all the branched versions. Ugh.

The problem has not gone unnoticed at Microsoft. Over successive releases of Visual Studio and VSS, improvements have been made.

My current employer is not on the latest and greatest. We’re on Visual Studio 2005 and Visual Source Safe 2005. The code base is in C#. I worked out a way to have shared solution and project files for non-web projects. Web projects in 2005 are different beasts and don’t follow the same rules. If you are using different versions of Visual Studio or Visual Source Safe, your mileage will vary.

When getting files from VSS into a working directory, Visual Source Safe 2005 will create or update a mssccprj.scc file if the files retrieved include a solution (.sln) or project file (.csproj for C#). The mssccprj.scc file is a text file containing the VSS bindings for the solution and project files that are in the same directory as the mssccprj.scc file.

In Visual Studio 2005, project files by default use the special ‘SAK’ string in the project file settings for VSS bindings. SAK indicates to Visual Studio that the bindings in the mssccprj.scc file should be used. The mssccprj.scc bindings are based on the directory retrieved from Source Safe. This means that shared project files just work. Yay.
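For illustration, a mssccprj.scc file might look something like the following. This is a sketch from memory, not a file copied from a real repository; the \\server\vss path and project names are made up:

```
SCC = This is a source code control file

[MyHelloWorld.sln]
SCC_Aux_Path = "\\server\vss"
SCC_Project_Name = "$/MyHelloWorld"

[MyHelloWorld.csproj]
SCC_Aux_Path = "\\server\vss"
SCC_Project_Name = "$/MyHelloWorld"
```

Because the file sits next to the solution and project files it describes, the bindings always reflect wherever the files were actually retrieved from.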

In 2005 the problem is with solution files.

More specifically, the problem is with solution files that reference project files outside the solution file’s own directory. Creating a MyHelloWorld project will create a MyHelloWorld.sln and a MyHelloWorld.csproj in a MyHelloWorld directory. The .sln will reference the .csproj by name only, both files will have bindings in the mssccprj.scc file in the same directory, and that all works without issue. But create a blank solution and add multiple existing projects to it, and Visual Studio 2005 reverts to storing specific VSS bindings for the projects in the solution file.

There’s a work-around, but it doesn’t help with parent-relative paths: any time a solution references a project where the path to the project file starts with ‘..’, Visual Studio will revert to storing a VSS binding in the solution file. Because of this limitation, it becomes convenient to adopt a convention that solution files generally go in the root of the code base tree.

The goal is to get the VSS bindings out of the solution file and have Visual Studio rely on the mssccprj.scc file. I haven’t found a reliable way to do this from within the IDE with existing projects but the solution (.sln) files are just text files, so I hack them.

Here’s a snippet from a .sln file. In Source Safe, the solution file is in “$/CodeBase v1”. There are two projects: a class library and a Windows application. The SccProjectName&lt;N&gt; values are bindings to specific locations in the Source Safe repository. These are the bindings that need to be removed.


Global
GlobalSection(SourceCodeControl) = preSolution
SccNumberOfProjects = 3
SccLocalPath0 = .
SccProjectUniqueName1 = ClassLibrary1\\ClassLibrary1.csproj
SccProjectName1 = \u0022$/CodeBase\u0020v1/ClassLibrary1\u0022,\u0020TAAAAAAA
SccLocalPath1 = ClassLibrary1
SccProjectUniqueName2 = WindowsApplication1\\WindowsApplication1.csproj
SccProjectName2 = \u0022$/CodeBase\u0020v1/WindowsApplication1\u0022,\u0020WAAAAAAA
SccLocalPath2 = WindowsApplication1
EndGlobalSection

The .sln file can be edited to remove the SccProjectName&lt;N&gt; values, but the SccLocalPath&lt;N&gt; values must be updated to ‘.’ and a new property, SccProjectFilePathRelativizedFromConnection&lt;N&gt;, must be added holding the old local path value with an appended directory separator.


Global
GlobalSection(SourceCodeControl) = preSolution
SccNumberOfProjects = 3
SccLocalPath0 = .
SccProjectUniqueName1 = ClassLibrary1\\ClassLibrary1.csproj
SccLocalPath1 = .
SccProjectFilePathRelativizedFromConnection1 = ClassLibrary1\\
SccProjectUniqueName2 = WindowsApplication1\\WindowsApplication1.csproj
SccLocalPath2 = .
SccProjectFilePathRelativizedFromConnection2 = WindowsApplication1\\
EndGlobalSection

Reference: Alin Constantin’s blog: The SAK source control strings in Visual Studio’s project files

Internet Explorer's Maximum URL Length

Note to self: Internet Explorer has a maximum URL length of 2,083 characters. Passing a longer URL (like maybe a URL with a dynamically built query string) to IE 7.0 will produce a cryptic error page (even with ‘Friendly’ errors turned off) that says “Internet Explorer cannot display the webpage” and then misleadingly suggests that the server cannot be reached. Don’t waste time troubleshooting network connections or DNS lookups. Don’t be puzzled by why the same URL works in Firefox and Safari.

Open a Command Shell from the Windows Explorer

It’s hugely convenient to be able to pop open a command shell (aka “Command Prompt”) in a given directory from the Windows Explorer. The Registry can be manually edited to add context menu commands and there is a Microsoft Knowledge Base article that describes one such approach but the PowerToys collection includes an “Open Command Window Here” item that’s a little more complete. The PowerToy adds shell commands for directories and drives.
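The manual Registry approach amounts to adding a verb under the Directory class, along the lines of the following .reg sketch. The ‘cmdhere’ key name and menu text here are arbitrary choices of mine, and the Knowledge Base article should be treated as authoritative for the details:

```
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Directory\shell\cmdhere]
@="Command Prompt Here"

[HKEY_CLASSES_ROOT\Directory\shell\cmdhere\command]
@="cmd.exe /k cd /d \"%L\""
```

The %L placeholder expands to the long path of the right-clicked directory, and /k keeps the shell open after the cd runs.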

Link: How to start a command prompt in a folder in Windows Server 2003, Windows XP, and Windows 2000.

Link: Microsoft PowerToys for Windows XP.

Apparently Vista includes an “Open Command Window Here” command, but only on a Shift-Right-Click in the Explorer’s right-hand pane.

Incidentally I always need to change the factory defaults for the Command Prompt. I have no great sentiment for monochrome displays and can’t see any sense to using white characters on a black background. I also change the font from ‘Raster’ to ‘Lucida Console’, enable ‘Quick Edit Mode’, increase the command history, increase the height of the screen buffer, and increase the height of the window.

ASP.NET QueryString Parse Gotcha

There are times when it’s useful to throw a token parameter on a URL and have the receiving page test for the existence of the parameter. For example:

/website/page.aspx?foo

What’s significant is whether the token exists or not. It doesn’t need to have a value.

When I tried to use this trick in an ASP.NET 2.0 based application I found an issue. I expected to be able to test something like:

if (Request.QueryString["foo"] != null)

Parameters have the form <name>=<value> and I expected “foo” to be taken as a name with no value. But “foo” wasn’t a key in the QueryString collection. “foo” was interpreted as a value and the key was null.

That’s not very useful.

And I don’t recall any other web development environment behaving that way. I quickly tested against JSP and ‘classic’ ASP. Sure enough. In JSP/ASP “foo” is interpreted as a name with a null or empty value.

So how to test for “foo” in ASP.NET? I didn’t want to iterate the whole QueryString collection. That would be wasteful. I didn’t want to depend on an index position. Depending on fixed positions is brittle and counter to the whole point of using name/value pairs.

My solution? I changed my URL to

/website/page.aspx?foo=

Trouble POSTing to a classic ASP page?

POSTing to an ASP page? ASP’s Request.Form requires application/x-www-form-urlencoded.

If you’re writing a web client of some sort that is trying to push data at an ASP-based page via a POST request, you need to set up your HTTP headers correctly. To get ASP to parse the request body into the Request.Form collection, the Content-Type must be “application/x-www-form-urlencoded” and the Content-Length must be set.

“application/x-www-form-urlencoded” is the default value for the HTML <form> element’s enctype attribute.
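Concretely, a well-formed request looks something like this (illustrative host, path, and parameter names; note that Content-Length must match the byte count of the body):

```
POST /website/page.asp HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 21

name=value&other=item
```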

Problem with the Script Element for ASP Server-Side Object-Oriented JavaScript

‘Classic’ ASP (Active Server Pages) is a very quirky environment. A case in point is the handling of server side script elements.

For an ASP project I wanted to use object-oriented JavaScript (AKA JScript) on the server and I wanted to be able to reuse the JavaScript code across multiple pages. I didn’t expect to be pushing any boundaries but I quickly found myself in deep weeds.

Two parts of what I wanted to do could be considered unusual.

First, I wanted to use JavaScript on the server.

Microsoft encouraged the use of VBScript for ASP but I prefer JavaScript’s C style syntax, I think JScript’s exception handling has advantages over VBScript’s “on error”, and I like being able to use essentially the same language on both the client and the server.

Second, I wanted to write object-oriented JavaScript.

The JavaScript language is fundamentally object based yet, incongruously, object-oriented programming in JavaScript is often considered to be esoteric.

ASP server side code can be placed either within ‘<%’ and ‘%>’ delimiters (a block called a ‘scriptlet’ in other environments) or within a script element with a non-standard runat="server" attribute.

In ASP reusing code across multiple pages means including shared files and in ASP 3.0 there are two ways to include a file: use a server side include directive* or a script element with a src attribute.

Link: Microsoft documentation on “Including Files in ASP Applications”

An example server side include directive:
<!-- #include file="server/include.inc" -->

The server side include directive performs a textual include of the referenced file. The included file can contain a mix of HTML and ASP scriptlets.

Since I wanted to pull in code only, my include file would be one large scriptlet.

But scriptlets aren’t supported in the global.asa file. One of my goals was to avoid code duplication, so I needed a way to include code without using a scriptlet.

Here’s an example of a script element that includes a file server side:
<script language="javascript" runat="server" src="server/include.js"></script>

The included file can only contain code in the specified language. There’s no scriptlet. That’s good.

However using script elements created a new set of issues.

In violation of the principle of least astonishment, ASP does not execute server side script in the order of its appearance within the page. The documented order of execution is:

  1. Script elements for non-default languages.
  2. Scriptlets (i.e. <% %> blocks).
  3. Script elements for the default language.

My default language was JavaScript. I would have a non-default language only if I were mixing languages, which I wasn’t.

This order of execution isn’t the whole story. Script elements for the default language appear to always be executed after the end of the page, with the exception of function definitions. There’s apparently some auto-magical fix-up being performed so that functions defined in script elements can be called from scriptlets.

In his blog, Fabulous Adventures in Coding, Eric Lippert wrote:

Ideally you want the server side <SCRIPT> blocks to contain only global function definitions, and the <% %> blocks to contain only “inline” code.

Link: Fabulous Adventures in Coding: VBScript and JScript Don’t Mix, at least in ASP

The apparent function definition fix-up seems to be half-baked.

In JavaScript functions are actually objects. A function named test could be equivalently defined either as:
function test() { return "test"; }

or as:
var test = function() { return "test"; }

Except in ASP, the former definition would be available to scriptlets while the latter definition wouldn’t be executed until the end of the page.
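Outside ASP, the same distinction shows up as ordinary JavaScript hoisting: a function declaration is usable before the line that defines it, while a function expression is not. A minimal sketch, runnable in any JavaScript engine:

```javascript
// A function declaration is hoisted: it is callable even before
// the line on which it appears.
var early = declared();

function declared() { return "test"; }

// A function expression is not: the variable is hoisted but holds
// undefined until the assignment runs, so calling expressed() at
// this point would throw a TypeError.

var expressed = function () { return "test"; };
var late = expressed();
```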

A bigger problem is that the order of execution has the unfortunate effect of blowing out prototype assignments.

An example will illustrate. Assume two files: circle.js and default.asp.

circle.js defines a constructor for a Circle type and assigns some additional properties to the Circle prototype. (The Circle type is borrowed from an example in the Microsoft JScript documentation.)

default.asp includes circle.js, creates an instance of a Circle, and iterates all of the members of the Circle object.

circle.js:

// circle.js

function Circle (xPoint, yPoint, radius)
{
    this.x = xPoint; // The x component of the center of the circle.
    this.y = yPoint; // The y component of the center of the circle.
    this.r = radius; // The radius of the circle.
}
Circle.prototype.pi = Math.PI;
Circle.prototype.area = function ()
    {
        return this.pi * this.r * this.r;
    }

default.asp:

<%@ language="jscript" %>
<script language="javascript" runat="server" src="server/circle.js"></script>
<html>
<head></head>
<body>
<%
var aCircle = new Circle(5, 11, 99);
    
// iterate the members/properties of the Circle object
for (var x in aCircle)
{
    Response.Write(x + " = " + aCircle[x] + "<br>");
}
%>
</body>
</html>

The output that might normally be expected:

area = function () { return this.pi * this.r * this.r; }
pi = 3.141592653589793
x = 5
y = 11
r = 99

The output that this page will actually produce:

x = 5
y = 11
r = 99

Why? Because the prototype assignments that add the pi and area members don’t execute until after the closing </html> tag.

Being able to set the prototype property is important because the prototype is an essential part of JavaScript’s support for object inheritance.

At this point some might settle for living with the evil of code duplication. That might be viable if the global.asa does little or nothing. My global.asa wasn’t going to be simple enough for that solution.

I needed a work-around. What I came up with was to create factory functions that wrapped the constructor definitions and the prototype assignments in a function scope.

Here’s a revised circle.js:

// circle.js

function createCircle(xPoint, yPoint, radius) // Circle factory
{
    function Circle (xPoint, yPoint, radius)
    {
        this.x = xPoint;
        this.y = yPoint;
        this.r = radius;
    }
    Circle.prototype.pi = Math.PI;
    Circle.prototype.area = function ()
        {
            return this.pi * this.r * this.r;
        }
    
    return new Circle(xPoint, yPoint, radius);
}

Constructing a Circle now looks like:
var aCircle = createCircle(5, 11, 99);

If there is an inheritance chain, all of the objects could be defined within the scope of the factory function and the factory function could take a selector argument to determine which type of object to construct.
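That idea can be sketched as follows. This is an illustrative extension of the circle.js example, not code from the original project; the Shape type and the ‘kind’ selector strings are inventions for the sketch:

```javascript
// Factory wrapping a small inheritance chain in a function scope.
// Shape and the "kind" selector values are made up for illustration.
function createShape(kind, xPoint, yPoint, radius) {
    function Shape(xPoint, yPoint) {
        this.x = xPoint;
        this.y = yPoint;
    }
    Shape.prototype.describe = function () {
        return "shape at (" + this.x + ", " + this.y + ")";
    };

    function Circle(xPoint, yPoint, radius) {
        Shape.call(this, xPoint, yPoint); // reuse Shape's instance setup
        this.r = radius;
    }
    Circle.prototype = new Shape();       // chain the prototypes
    Circle.prototype.area = function () {
        return Math.PI * this.r * this.r;
    };

    // The selector argument picks which hidden constructor to invoke.
    switch (kind) {
        case "shape":  return new Shape(xPoint, yPoint);
        case "circle": return new Circle(xPoint, yPoint, radius);
        default:       throw new Error("unknown kind: " + kind);
    }
}

var aCircle = createShape("circle", 5, 11, 99);
```

Because the constructors live entirely inside the factory's scope, both the constructor definitions and the prototype assignments execute whenever the factory is called, sidestepping ASP's end-of-page deferral. The trade-off is that only the factory is visible from outside.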

The function scope trick is not a perfect solution. One issue is that a type defined within a function scope can’t be used with the instanceof operator. But the function scope trick does give me a way to get most of the benefit of leveraging JavaScript’s object oriented capabilities despite ASP’s best efforts to confound.

* The server side include directive is often confused with real SSI. It’s not SSI. It’s ASP aping the syntax of SSI. It’s a curious choice because ASP otherwise follows a very different syntax model. Another curiosity is that the documentation states “ASP includes files before executing script commands” and then belies that statement with an example of including a file multiple times by using a server side include directive within a for loop.

iisreset

An essential tool for sysadmins and developers working with Microsoft’s Internet Information Server (IIS) is the iisreset command. iisreset was introduced with IIS version 5.0 and can be used to stop and start IIS from the command line.

IIS is composed of several Win32 services. A Win32 service is a background process like a Unix daemon. NT kernel based versions of Windows (e.g. Windows NT 4.0, Windows 2000, Windows XP, Windows 2003) support a sophisticated service infrastructure.[1]

In versions of IIS prior to 6.0 all of the services that compose IIS are housed in inetinfo.exe. In IIS 5.0 (Win2k) and 5.1 (WinXP) these services include the following:

Service Name   Protocol   Description
IISADMIN                  IIS Administration
MSFTPSVC       FTP        File Transfer Protocol Server
NNTPSVC        NNTP       Network News Server (available on server versions of Windows)
SMTPSVC        SMTP       Mail Transport Server
W3SVC          HTTP       Web Server

The command
iisreset /stop

stops all IIS services. When all the services are stopped the inetinfo process terminates.

To restart use
iisreset /start

The /start switch starts all IIS services that are configured as ‘autostart’.

Prior to 5.0, cycling the web server could be achieved via the net (stop|start) commands.

For example,
net stop iisadmin /y

is equivalent to
iisreset /stop

The other services in IIS are dependent services of IISADMIN. Stopping a superordinate service will also stop any subordinate or dependent services. So stopping IISADMIN is sufficient to stop all the IIS services. (When a service has dependent services, as is the case with IISADMIN, the net stop command will prompt for confirmation. The /y switch provides an automatic confirmation.)

net (stop|start) is still useful because it can be used more selectively than iisreset. net stop w3svc stops the web server only. There is no equivalent with iisreset. But net stop doesn’t have iisreset’s /timeout switch.

The GUI tool for managing IIS, the Internet Services Manager, has start, stop, and pause buttons. Interestingly, clicking the stop button in the IIS Manager doesn’t fully stop the selected service. The web server, for example, will stop responding to requests, but any files in use remain locked and in use.

[1] In the past NT Services have had a reputation for being hard to develop. Services run in a unique context and require some special considerations.

I know of a case where an engineer saved himself the trouble and built a desktop application for something that really truly was a service. It was a short-sighted decision. Some of the consequences include no autostart and no remote administration. The operations staff must curse that guy.