Setting up roles is straightforward, but by itself it doesn't accomplish much. To put our roles to work, we have to add users to them. To do so, expand the role to which you want to add users and right-click the "Users" item. Users are selected through the standard Windows 2000 user dialog:
To make our example work, I add myself to the "Managers" role and click "OK".
At this point, I have access to the COM+ Application. I can instantiate the secure customer component without causing any access violations. However, I am not able to access any of the methods of that component, because the "Managers" role doesn't have sufficient access rights for this specific component. We can easily fix this in the component's Properties dialog.
In the security tab, set the Managers role to have access to this component. At this point, we have access to all methods of that component. Note that we can also set access rights at a finer level of granularity. We can allow access based on a specific interface (which wouldn't make a whole lot of sense in our example, since we only have the default interface), and even on individual methods. A good example of that is the SalesReps role. I would like to allow sales reps to see a customer's credit limit, but allow only managers to set the limit. For this reason, I activate the SalesReps role for the GetCreditLimit() method only.
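Incidentally, these settings don't have to be made through the Component Services snap-in; they can also be scripted through the COM+ Admin objects (reference the "COM+ 1.0 Admin Type Library" in VB). The following is only a sketch — the application name "CustomerApplication" and the ProgID "customer.secure" are assumptions standing in for whatever names your own example uses:

```vb
' Hedged sketch: granting the Managers role component-level access
' through the COM+ Admin catalog. Names are assumptions.
Dim cat As New COMAdminCatalog
Dim apps As COMAdminCatalogCollection
Dim comps As COMAdminCatalogCollection
Dim roles As COMAdminCatalogCollection
Dim app As COMAdminCatalogObject
Dim comp As COMAdminCatalogObject
Dim role As COMAdminCatalogObject

Set apps = cat.GetCollection("Applications")
apps.Populate
For Each app In apps
    If app.Name = "CustomerApplication" Then
        Set comps = apps.GetCollection("Components", app.Key)
        comps.Populate
        For Each comp In comps
            If comp.Name = "customer.secure" Then
                ' "RolesForComponent" holds the roles allowed to
                ' call this component; method-level settings live in
                ' the "RolesForMethod" collection further down.
                Set roles = comps.GetCollection("RolesForComponent", comp.Key)
                roles.Populate
                Set role = roles.Add
                role.Value("Name") = "Managers"
                roles.SaveChanges
            End If
        Next
    End If
Next
```

Scripting the catalog this way is mainly interesting for setup programs that have to configure a COM+ Application on installation.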
A very important point to note here is that all of these security features are purely administrative. No code is required to make these things work (although an error handler on the client side might be beneficial). However, this kind of security may not be good enough for all purposes. Let's assume we want sales reps to be able to set the credit limit, but only up to $5000. Everything beyond that has to be set by a manager. Administrative security won't do that for us, but programmatic security will.
COM+ Security is exposed to programmers as a security context. The security context delivers a wealth of information about the component's caller and the way that caller was authenticated. Checking whether a user belongs to a certain role is relatively easy. Here's the Visual Basic version of the SetCreditLimit() method (which is slightly simpler than the Visual FoxPro version, because VB automatically exposes the security context):
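The original listing isn't reproduced here, but the method might look roughly like this. The COM+ Services Type Library exposes GetSecurityCallContext() as a global function; the class member mnCreditLimit and the error number are assumptions for this sketch:

```vb
' Hedged sketch of SetCreditLimit() -- details may differ from the
' original listing. Requires a reference to the COM+ Services
' Type Library.
Public Sub SetCreditLimit(ByVal nLimit As Currency)
    If nLimit > 5000 Then
        ' Amounts beyond $5,000 require the caller to be
        ' in the Managers role.
        If Not GetSecurityCallContext.IsCallerInRole("Managers") Then
            Err.Raise vbObjectError + 1001, "Customer", _
                "Only managers may set credit limits above $5,000."
        End If
    End If
    ' Assumed member variable holding the customer's limit.
    mnCreditLimit = nLimit
End Sub
```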
Here is the Visual FoxPro version. Note that it isn't necessary to set up any
references in Visual FoxPro (make sure to add the COM+ Services Type Library to
your Project References in VB!), but on the other hand, security context objects
need to be instantiated manually:
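Again, the original listing isn't reproduced here; a sketch of the Visual FoxPro version might look like the following. It obtains the context manually through the MTS-compatible "MTxAS.AppServer" object, which also works under COM+; the parameter and property names are assumptions:

```foxpro
* Hedged sketch of the Visual FoxPro SetCreditLimit() method.
* The context object is instantiated manually -- no project
* reference is required.
PROCEDURE SetCreditLimit(tnLimit)
   LOCAL loMTS, loContext
   IF tnLimit > 5000
      * Amounts beyond $5,000 require the Managers role.
      loMTS = CREATEOBJECT("MTxAS.AppServer")
      loContext = loMTS.GetObjectContext()
      IF NOT loContext.IsCallerInRole("Managers")
         ERROR "Only managers may set credit limits above $5,000."
         RETURN
      ENDIF
   ENDIF
   THIS.nCreditLimit = tnLimit
ENDPROC
```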
Rebuild your Visual FoxPro project (you may need to shut down your COM+ Application to avoid sharing violations).
When you instantiate the recompiled component, you can set credit limits up to $5000 without a problem, but beyond that amount, it will depend on whether or not the current user is assigned the Managers role. For test purposes, I recommend creating several user accounts so you can log on as different users and see the effects of your security settings.
Note: Make sure that the test users actually have file access rights to the DLL that the component lives in, or you will see error messages very similar to COM+ Security violations, which can be very confusing.
The role is only one possible setting to check programmatically. The security context exposes a great deal of information about the user, as well as the enforced level of security, authentication, and more. Listing all of those settings is beyond the scope of this document. For further information, we recommend reading one of the many COM+ books or the COM+ documentation at http://msdn.microsoft.com.
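As a small taste of what else is available, the following VFP sketch queries the context for more than just role membership. (Under COM+, the full security call context additionally exposes items such as the direct and original caller — see the documentation for the complete list.)

```foxpro
* Hedged sketch: querying other security information through the
* same MTS-compatible context object used above.
loMTS = CREATEOBJECT("MTxAS.AppServer")
loContext = loMTS.GetObjectContext()
* Are security checks enabled for this component at all?
? loContext.IsSecurityEnabled()
* Role checks work for any role defined in the application.
? loContext.IsCallerInRole("SalesReps")
```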
Authentication and Authorization
Monolithic applications usually provide their own login dialog and apply a more or less (usually less) sophisticated authentication mechanism. For component-based applications (possibly of distributed nature), this approach is no longer viable. Since components are available throughout the system, a security mechanism sitting on top of those components - perhaps a global application object - just won't work. Also, there are usability reasons that make this approach appear old-fashioned.
Modern systems don't differentiate between different applications. They simply provide a digital desktop with all kinds of tools, and once the user is logged on to that digital desktop, he should keep that identity and shouldn't be asked to log on again. Plus, what are the chances that the average developer will write a security system that outperforms the Windows 2000 security system?
So, there are compelling reasons to use standard Windows 2000 user accounts for your security purposes. Besides, it is simply easier to use that mechanism than to code your own. Just consider all the features that come with that security system! The scale ranges from simple local logins to distributed access using smart cards.
The focus of this document isn't how to verify someone's identity or how to set up a sophisticated distributed environment. Although interesting, those topics are beyond the scope of this document. We will investigate these scenarios in future issues of CoDe Magazine. What is within the scope of the current document is a quick look at the account used by the system to access a component.
In all of the examples so far, we have used the "Interactive User" to access our components. The interactive user is the user currently logged on to the system. This setting works well for components running locally, but may not be sufficient in distributed scenarios. In the latter case, the component would always execute with the identity of whatever user happens to be logged on. If the server just sits there without having a user logged on, the system won't work at all, since no authenticated user is present.
There are two ways around this dilemma. For one, we can specify a user account to be used when executing a COM+ Application:
In this scenario, the specified user is logged on in the background (a new, invisible window station is created), resulting in some overhead. However, this shouldn't be too significant if most of your COM+ Applications use the same account. A typical example is an Internet user account that restricts access to your system over the Internet, yet provides the functionality you want to make available to the public.
The second option is to impersonate the user who logs on. This results in significant overhead, because a new window station has to be launched for every user, so this scenario is not very scalable. However, it represents the cleanest approach from a system design perspective. When security is of high priority and you expect a small number of users, this is the preferable method.
Impersonation is a very interesting topic when all the details are considered. Imagine a distributed scenario where the user, logged on at her workstation, accesses a business object on a server, which accesses another component on another server, which in turn accesses SQL Server on a third server. The first step is straightforward: the business object will use the security credentials of the client. But what happens to the second component? Depending on the configuration, it can use the credentials of the business object or those of the client.
If we use an "Impersonate" or "Delegate" setting, the component will use the credentials of the client. Now, what happens to the SQL Server backend? It could use the client's credentials, those of the business object, or those of the component. Again, this depends on the settings. "Impersonate" only allows for one "hop", so SQL Server would not see the credentials of the client (which may very well be the desired behavior). "Delegate" is a little more powerful. It can carry credentials over an unlimited number of hops, so SQL Server could in fact see the credentials of the client.
So, this mainly becomes a question of system design. How secure does an application have to be? Do users have to be set up again on SQL Server, or can we assume that a call is secure if the business object is happy with the credentials?
Security is a growing concern, even for developers who haven't been exposed to the subject so far. Distributed environments and component-based applications are great productivity tools, but they are also more vulnerable to attacks and mistakes. COM+ Security provides an extremely flexible and extensible way to create a secure environment for your components. We think that once you explore its capabilities, you will never write a custom security system again.