It’s very important to capture trends in the sizes of your SQL Server 2005 databases because it allows you to plan for future space needs, spot developing problems, and anticipate periods of heavy volume. I’ll show you the simple method that I use to capture this information.
An example
I will capture a snapshot of the information related to the sizes of my database files; in my next article, I will analyze the information to see when my data files and log files grow the most.
Each database on the SQL Server contains information regarding the size of the database files, along with some other related information. In order for me to get to this information, I need a method to retrieve the data from the individual databases one at a time. I have two available options:
sp_spaceused: This system stored procedure will return the size statistics for the current database context in which it is running. It is very useful for returning ad hoc information regarding database or table sizes within the database; however, it is not very friendly for reporting purposes. It is possible to capture the information for each database through a script, but it would require the use of a user-defined cursor. (A quick example of calling it appears after this list.)
sp_msforeachdb: This is a very useful system stored procedure that will execute any SQL script you pass to it in each of the databases on your SQL Server instance. The stored procedure simply loops through the databases, which is easy enough to write yourself, but it saves you the trouble. This is the method I will use in my code to capture database file size information.
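If you just want that ad hoc look, sp_spaceused can be called with or without an object name. A quick sketch; the table name in the second call is purely illustrative and assumes such a table exists in the current database:
-- Size summary for the current database
EXECUTE sp_spaceused
-- Size summary for a single table (illustrative name)
EXECUTE sp_spaceused 'SalesHistory'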
The information I want to gather and store is available in the sys.database_files system view. This gives me the size of the database files, along with some other handy information such as the state of each file, the manner in which the files grow (by a fixed size or a percentage), and whether each file is read-only. I will need to capture this information for each database.
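As a quick illustration of what the view exposes, a query along these lines pulls the columns I care about for the current database; note that size is reported in 8-KB pages:
-- Key sizing columns from sys.database_files for the current database
SELECT name, type_desc, state_desc, size, growth, is_percent_growth, is_read_only
FROM sys.database_files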
The script below creates a table named DatabaseFiles (if it does not already exist) based upon the structure of the system view sys.database_files; it also adds a new column to capture when the record was added to the table.
IF OBJECT_ID('DatabaseFiles') IS NULL
BEGIN
SELECT TOP 0 * INTO DatabaseFiles
FROM sys.database_files
ALTER TABLE DatabaseFiles
ADD CreationDate DATETIME DEFAULT(GETDATE())
END

Now it is time to populate the DatabaseFiles table. This script uses the sp_msforeachdb stored procedure and passes a SQL script that inserts data from the sys.database_files view into the DatabaseFiles table that I created above. If you examine the script, you will notice that I am building in the database name for each database. This is subtle, and it’s accomplished by the [?] prefix to the sys.database_files view. The code is executed in each database on the instance, and the name of the database is substituted for the [?] marker. Information for each database is inserted into the DatabaseFiles table with one line of code, which is a lot easier than writing a cursor to do the same. I also added a GETDATE() call to indicate when the records were inserted into the table.
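If you want to see the placeholder substitution in isolation before running the real insert, a harmless sanity check such as this one simply prints each database name:
-- sp_msforeachdb substitutes each database name for the ? marker
EXECUTE sp_msforeachdb 'PRINT ''?'''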
Note: This example somewhat goes against two coding standards that I am typically strict about: using SELECT * and inserting into a table without a column list. I omitted them because the SQL string that I am building would have been a lot less desirable to view. If this was code that I put into a production environment, I would have made the necessary changes accordingly.
EXECUTE sp_msforeachdb 'INSERT INTO DatabaseFiles SELECT *, GETDATE() FROM [?].sys.database_files'

To make sure that all of my data was captured correctly, I’ll look at what is in the table.
SELECT * FROM DatabaseFiles
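Once snapshots have accumulated, a query along these lines shows how each file grows over time. This is only a sketch of the kind of analysis the next article covers, and it assumes the capture script has run on more than one date:
-- File size per snapshot date; size is in 8-KB pages, so convert to MB
SELECT physical_name,
CONVERT(VARCHAR(10), CreationDate, 120) AS SnapshotDate,
size * 8 / 1024 AS SizeMB
FROM DatabaseFiles
ORDER BY physical_name, SnapshotDate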
Thursday, February 21, 2008
Using the Computer Management Console’s Shared Folders snap-in
Managing open files, active shares, and user sessions can take up quite a bit of time. The Computer Management Console’s Shared Folders snap-in can make your job easier by showing remote activity and resource access on a given system.
Shared Folders will not list the documents that you are working on locally; keep this in mind if you open one of these objects on a system and the view is empty. As with other Computer Management Console snap-ins such as Event Viewer, Shared Folders is available on all versions of Windows 2000, Windows XP, Windows Server 2003, and Windows Vista.
Components of the Shared Folders snap-in
Shared Folders includes the following three objects, which allow you to monitor remote activity and resource access on any system on your network from the comfort of your office.
Shares: Shows the active shares (including all administrative shares) for the system to which you are connected.
Sessions: Shows all the user sessions that are connected to your system. If someone is accessing a Windows Server 2003 resource remotely, this snap-in will show you their session. You can disconnect sessions by right-clicking a session and choosing either Disconnect Selected Session or Disconnect All Sessions.
Open Files: Shows the files on the system that are currently open and shows you which users have the files or folders open; this can be helpful in tracking down why other users cannot open certain files. When using Open Files, you can close any file that any user has open simply by right-clicking the file’s entry in the list and choosing Close Open File.
Remote connections
When accessing the Computer Management Console, you can connect remotely to other systems to view their resources. (The remote systems must be running Windows 2000 or higher.)
To connect remotely to other systems, follow these steps:
Open the Computer Management Console by right-clicking My Computer on the Windows XP Start menu and selecting Manage. (In Windows 2000, right-click My Computer on the desktop. In Windows Vista, right-click Computer or enter Computer Management in the Start Menu’s Search box.)
Right-click the computer object at the top of the left pane and select Connect To Another Computer. Or, click the Action menu and select Connect To Another Computer.
Enter the name of the computer you wish to connect to and click OK.
If the desired system is available, the Computer Management Console will display the resources available on the remote system.
Next week, I will focus on the Computer Management Console’s Local Users and Groups snap-in.
Create your own special characters in Windows XP
If you’ve ever wanted to create your own font or maybe just a special character — for example, a character showing your initials for when you wish to approve documents with your “signature” — you can easily create your own special characters using a hidden Windows XP tool called the Private Character Editor. Here’s how:
Press [Windows]R to open the Run dialog box.
Type eudcedit in the Open text box and click OK.
When the Private Character Editor launches, you’ll see the Select Code dialog box. Click OK.
A user interface that looks and works very much like Paint will appear. From this, you may use standard tools to create your characters.
When you finish, select the Save Character command on the Edit menu.
Once you save your new character, you can access it using the Character Map tool. Here’s how:
Press [Windows]R to open the Run dialog box.
Type charmap in the Open text box and click OK.
When the Character Map appears, choose All Fonts (Private Characters) from the Font drop-down list.
Select your character, click the Select button, and then click the Copy button.
You can now paste your font character in any document that you want.
Sunday, February 10, 2008
Enterprise considerations for Microsoft Network Access Protection
Having an MS-NAP implementation in place will provide your network with an extra level of protection at the entry point. There are certainly networks that need the maximum level of security for every point of connectivity; however, only the business or your technology situation can determine what you need from the perspective of network access protection. A fully deployed MS-NAP implementation uses many different communication mechanisms, and a strong point of MS-NAP is that it can be utilized with some or all of its features and roles. In this article, we'll take a look at some of the things you need to take into consideration from an enterprise perspective.
Enforcement types for MS-NAP
If you are considering MS-NAP for your environment, you cannot invest enough time in the planning and testing phases. Deciding on the best enforcement type for a policy is critically important. The means of enforcing MS-NAP are varied in their functionality and complexity.
Enforcement types
The MS-NAP implementation can enforce the compliance policy through these four mechanisms:
VPN: The VPN server relays the policy from the Network Policy Server (NPS) to the requesting client and performs the validation. This is not to be confused with Windows Server 2003's Network Access Quarantine Control feature.
DHCP: The DHCP server interacts with the policies from the NPS to determine the client's compliance.
IPSec: The IPSec enforcement of MS-NAP is Microsoft's strongest offering for network access protection. It enforces the policy and configures the systems out of compliance with a limited access local IP security policy for remediation.
802.1X: The MS-NAP client authenticates over an 802.1X authenticated network; this is the best solution when integrating hardware from other vendors, since 802.1X is an IEEE standard supported by vendors such as Microsoft, Cisco, HP, Trapeze, and Enterasys.
Each enforcement type will direct the client that is out of compliance to the remediation network where a resolution should be able to occur before accessing the desired network. The remediation network should be given some thorough planning. Making the remediation network a place where clients (managed or unmanaged) can gain the requisite updates or programs without support staff intervention will be critical in making the entire MS-NAP implementation a success. Choosing an enforcement method is an important first step in a successful implementation.
Planning what can happen on the remediation network is very important as well. Ask whether Windows updates can be accessed from this network; whether anti-virus updates and installations can be obtained there; and, most importantly, whether users can perform the required updates automatically, without involving the client support staff.
Network Policy Server (NPS) mastery
In planning an MS-NAP implementation, you should develop a deep understanding of the NPS role in Windows Server 2008. This server role determines where systems go based on their configuration, and it is especially important because it touches other server roles or equipment depending on the enforcement mechanism selected. The NPS role also acts as a RADIUS server for the MS-NAP clients.
Real-world administration effort and support
Many network administrators are overworked and have a difficult time imagining when they could allocate the time to properly plan a network access protection system, much less fully test and implement such a solution. The common response from a quick, unscientific survey of network administrators was "It would be nice, but I don't have the time." Whether the solution comes from Microsoft or a networking vendor, the responses are fairly consistent.
From an ongoing support perspective, an MS-NAP implementation can go either way. If the remediation network gives users a robust, intuitive way to become compliant, the ongoing support effort will be minimal when systems that have dipped out of compliance need to regain access to the network.
Networking hardware support
If the 802.1X enforcement method is selected, a unique challenge is presented: it requires maintaining support for the MS-NAP implementation from both a networking hardware and a server operating system perspective. While networking hardware vendors offer 802.1X authentication for individual ports, it takes additional administrative effort to ensure end-to-end compatibility.
New services on clients and domain group policy objects
For the client elements of the MS-NAP implementation, new services and local configuration elements are required to utilize the functionality. Pushing these configuration elements to managed systems through an Active Directory domain GPO is the best way to deploy to large numbers of existing systems. The new configuration elements for the MS-NAP implementation are not available in Active Directory domains running at the Windows Server 2003 functional level, but they are available for Windows Server 2008 level domains. There are other ways to configure the new services for clients, but it is optimal to manage them natively in the domain Group Policy editor and link the new GPO to an OU or a domain.
It is not clear what implementation configuration would be required for Windows XP clients since Service Pack 3 is not yet available; nor is it clear how a Windows XP MS-NAP client would be managed -- if at all possible -- from a Windows Server 2008 functional-level Active Directory domain.
Cisco's NAC hardware explained
Cisco Network Admission Control (NAC) is a system to enforce the security policy of your company on all devices attempting network access. The Cisco NAC solution is made up of many different pieces of hardware, software, and services; this article will explain its many pieces.
What hardware makes up Cisco's NAC solution?
On Cisco's network security solutions Web page, you'll find the following list of Cisco technologies, all of which play a part in the complete Cisco NAC solution:
Advanced Services for Network Security
Cisco Security Agent (CSA)
Cisco Security Monitoring, Analysis and Response System (MARS)
Cisco Trust Agent 2.0 (CTA)
Cisco Secure Access Control Server for Windows (ACS)
Cisco Secure Access Control Server Solution Engine (ACS)
Cisco Works Interface Configuration Manager (ICM)
Cisco Works Security Information Management Solution (CW-SIMS)
NAC-enabled routers
Router security
Cisco VPN 3000 Series Concentrators
Cisco Unified Wireless Network
Cisco Catalyst switches
Let's discuss some of the more critical pieces of Cisco's NAC solution.
Cisco NAC-enabled routers
The recently released Cisco NAC router module enforces NAC at remote branch locations or ancillary buildings of a campus. Apart from that, the NAC router module also improves the overall security of the network by making sure that all incoming users and devices comply with security policies.
Additionally, the Cisco NAC router module (part # NME-NAC-K9) brings the capabilities of the Cisco NAC Appliance Server to Cisco 2800 and 3800 Series Integrated Services Routers. This module saves network administrators from having to deploy NAC appliances across the board, and it helps consolidate administrative tasks into fewer boxes.
Amazingly, this module is actually a 1 GHz Intel Celeron PC, with 512 MB RAM, 64 MB of Compact Flash, and an 80 GB SATA hard drive. All that fits onto a single 1 pound module that slides into a router and enforces your security policies. This module requires a 2800 or 3800 series router running IOS 12.4(11)T or later.
Cisco NAC Appliance
The single most popular piece of the Cisco NAC solution has been the Cisco NAC Appliance. As the name suggests, the Cisco NAC Appliance is an appliance-based solution that offers fast deployment, policy management, and enforcement of security policies.
With the Cisco NAC Appliance, you can opt for an in-band or out-of-band solution. The in-band solution is for smaller deployments. As your network grows into more of a campus environment, you may not be able to keep the in-band design; in that case, you can move to the out-of-band deployment scenario.
Here are some advantages of the Cisco NAC Appliance:
Identity: At the point of authentication, the Cisco NAC Appliance recognizes users, as well as their devices and their roles in the network.
Compliance: Cisco NAC Appliance also takes into account whether machines are compliant with security policies or not. This includes enforcing operating system updates, antivirus definitions, firewall settings, and antispyware software definitions.
Quarantine: If the machines attempting to gain access don't meet the policies of the network, the Cisco NAC Appliance can quarantine these machines and bring them into compliance (by applying patches or changing settings), before releasing them onto the network.
For more information about the Cisco NAC Appliance, see the Cisco NAC Appliance datasheet.
Cisco Secure Access Control Server (ACS)
The Cisco ACS Server could be called the "brain" of the Cisco NAC solution. It is here that users' credentials are checked to see if they are valid, policies are sent back to be enforced, and activities are logged. The ACS server is called an AAA Server because it performs authentication, authorization, and accounting.
This server runs on an existing Windows server in your organization and can use other existing databases in your organization to verify users' credentials. For example, most companies have ACS point toward their Windows Active Directory (AD) system to look up credentials. If those credentials are valid, then ACS can enforce network authorization policies on those users, with the help of the network hardware: NAC Appliance, Router NAC module, or ASA/PIX firewalls.
Cisco Security Agent (CSA)
Cisco CSA is a software client that runs on every machine in an organization. These clients talk to a centralized policy server. Together, these applications learn which software and activities on each PC in the organization are or are not "normal". The CSA agent may alert on or block certain activities that it sees as abnormal.
Unlike anti-virus software, which depends on definition updates to stay current, Cisco touts that the CSA never needs updating because it is constantly "learning" by monitoring activities rather than virus definitions.
For more information about the Cisco CSA solution, see the Cisco CSA datasheet.
Cisco Trust Agent (CTA)
You can think of the Cisco Trust Agent as the "NAC Client". The CTA runs on each PC in the organization. It talks to the NAC Appliance, for example, to tell it about the state of the device attempting to access the network. For example, the CTA reports the version of the OS, patch level, the AV definition level, the firewall status, and more. According to Cisco, the CTA "interrogates devices." You can obtain CTA free of charge from Cisco Systems.
Cisco Works Security Information Management Solution (CW-SIMS)
The Cisco Works Security Information Management Solution (CW-SIMS) is the centralized repository that all Cisco devices use for security logging and other information. According to Cisco, this application "integrates, correlates, and analyzes security event data from the enterprise network to improve visibility and provide actionable intelligence for strengthening an organization's security."
With so many security devices in your network, one application has to try to correlate all the logs and security information that is generated. According to Cisco, here are the features that the CW-SIMS offers:
Comprehensive Correlation: Statistical, rules-based, and vulnerability correlation of events as they happen, in real time, across all integrated Cisco network devices.
Threat Visualization: See a visual status and generate reports of all the security events as they happen across your network.
Incident Resolution Management: SIMS integrates with common helpdesk packages to track security events until resolution.
Integrated Knowledge Base: SIMS can be a source of knowledge about security issues and how they are resolved.
Real-Time Notification: SIMS can notify security admins, in real time, when events occur.
For more information about the Cisco CW-SIMS solution, see the Cisco CW-SIMS datasheet.
Cisco Security Monitoring, Analysis, and Response System (MARS)
While MARS may seem similar to CW-SIMS, it is quite different. MARS actually understands the configuration and topology of your network. You can think of MARS as a "virtual security admin" for your network -- working while you sleep.
MARS uses NetFlow data from Cisco routers to maintain a real-time understanding of network traffic. It knows what is considered normal and what is not; this is called behavioral analysis. With behavioral analysis, MARS can stop abnormal network traffic. MARS has over 150 audit compliance templates, and it will make recommendations on how to remediate threats to your network.
MARS is actually an appliance that you install on your network. This appliance comes in a variety of sizes and license levels based on the size of your network. Cisco Security MARS and Cisco Security Manager are part of the Cisco Security Management Suite.
In summary
To be a complete solution that can fulfill the Cisco Self-Defending Network framework, the hardware and software of Cisco's NAC solution must integrate well. With nine or more different pieces of hardware and software related to NAC, the challenge of acquiring (i.e., affording), learning to configure, deploying, and monitoring these solutions can be a large task for any organization. While having the centralized software applications like CW-SIMS and MARS can really bring it all together, those applications will take time, effort, and expertise to master. For this reason, I can relate to anyone who says that deploying a security solution is difficult.
In this article, I've attempted to clarify the purpose of the different NAC security solutions offered by Cisco today; with this information, I hope that your quest for strong network security can be realized.
Finding dependencies in SQL Server 2005
Any time you need to modify objects in your SQL Server 2005 database, the objects that depend upon them are a concern. You don’t want to remove columns, procedures, views, or tables if other objects that depend upon them are still in use.
This tutorial will show how you can write a procedure that will look up all of the objects that are dependent upon other objects.
How to write the procedure
To start a dependency chain, I create a table and then create some objects that will depend upon that table. Below is a script to create my SalesHistory table and load some data into it:
IF OBJECT_ID('SalesHistory') IS NOT NULL
DROP TABLE SalesHistory;
GO
CREATE TABLE [dbo].[SalesHistory]
(
[SaleID] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
[Product] [char](150) NULL,
[SaleDate] [datetime] NULL,
[SalePrice] [money] NULL
)
GO
DECLARE @i SMALLINT
SET @i = 1
WHILE (@i <=100)
BEGIN
INSERT INTO SalesHistory
(Product, SaleDate, SalePrice)
VALUES
('Computer', DATEADD(mm, @i, '3/11/1919'), DATEPART(ms, GETDATE()) + (@i + 57))
INSERT INTO SalesHistory
(Product, SaleDate, SalePrice)
VALUES
('BigScreen', DATEADD(mm, @i, '3/11/1927'), DATEPART(ms, GETDATE()) + (@i + 13))
INSERT INTO SalesHistory
(Product, SaleDate, SalePrice)
VALUES
('PoolTable', DATEADD(mm, @i, '3/11/1908'), DATEPART(ms, GETDATE()) + (@i + 29))
SET @i = @i + 1
END

I’ll create a couple of objects that are dependent upon the SalesHistory table. This view uses the DENSE_RANK ranking function to return the sales rank of each product based on when the product was entered into the table. This view is directly dependent upon the SalesHistory table.
CREATE VIEW vw_SalesHistory
AS
SELECT SaleRank = DENSE_RANK() OVER (PARTITION BY Product ORDER BY SaleID ASC), *
FROM SalesHistory
GO

The stored procedure returns the total sales for the Computer product group. This procedure uses the view that I just created, so it is dependent upon that view, which is dependent upon the SalesHistory table. In a sense, this creates a dependency chain.
CREATE PROCEDURE usp_GetTotalComputerSales
(
@TotalSales MONEY OUTPUT
)
AS
BEGIN
SELECT @TotalSales = SUM(SalePrice)
FROM vw_SalesHistory
WHERE Product = 'Computer'
END
GO

Here is the code to create the system stored procedure for finding object dependencies:
USE master
GO
CREATE PROCEDURE sp_FindDependencies
(
@ObjectName SYSNAME,
@ObjectType VARCHAR(5) = NULL
)
AS
BEGIN
DECLARE @ObjectID AS BIGINT
SELECT TOP(1) @ObjectID = object_id
FROM sys.objects
WHERE name = @ObjectName
AND type = ISNULL(@ObjectType, type)
SET NOCOUNT ON;
WITH DependentObjectCTE (DependentObjectID, DependentObjectName, ReferencedObjectName, ReferencedObjectID)
AS
(
SELECT DISTINCT
sd.object_id,
OBJECT_NAME(sd.object_id),
ReferencedObject = OBJECT_NAME(sd.referenced_major_id),
ReferencedObjectID = sd.referenced_major_id
FROM
sys.sql_dependencies sd
JOIN sys.objects so ON sd.referenced_major_id = so.object_id
WHERE
sd.referenced_major_id = @ObjectID
UNION ALL
SELECT
sd.object_id,
OBJECT_NAME(sd.object_id),
OBJECT_NAME(referenced_major_id),
object_id
FROM
sys.sql_dependencies sd
JOIN DependentObjectCTE do ON sd.referenced_major_id = do.DependentObjectID
WHERE
sd.referenced_major_id <> sd.object_id
)
SELECT DISTINCT
DependentObjectName
FROM
DependentObjectCTE c
END

This procedure uses a Common Table Expression (CTE) with recursion to walk down the dependency chain to get to all of the objects that are dependent on the object passed into the procedure. The main source of data comes from the system view sys.sql_dependencies, which contains dependency information for all of your objects in the database.
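To see what the anchor member of the CTE produces on its own, you can query sys.sql_dependencies directly for the first level of dependents of the SalesHistory table:
-- Direct (first-level) dependents of SalesHistory only
SELECT DISTINCT OBJECT_NAME(sd.object_id) AS DependentObjectName
FROM sys.sql_dependencies sd
WHERE sd.referenced_major_id = OBJECT_ID('SalesHistory')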
Note: There are exceptions in this view. SQL Server 2005 only records a row in sys.sql_dependencies if it can resolve the dependency when the object is created. If it cannot add a dependency, it warns you at the time the object is created.
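For example, because of deferred name resolution, the following procedure is created even though the table it references does not exist, so no dependency row is recorded and it will never show up in these lookups. The object names here are purely illustrative:
-- Creation succeeds with a warning, but sys.sql_dependencies gets no row
CREATE PROCEDURE usp_MissingDependency
AS
BEGIN
SELECT * FROM NoSuchTable
END
GO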
I want to mark the stored procedure as a system stored procedure so I can call it for any object in any database.
EXECUTE sp_ms_marksystemobject 'sp_FindDependencies'

Now I can call my new system stored procedure to find any objects that are dependent upon the SalesHistory table that I just created.
EXECUTE sp_FindDependencies 'SalesHistory'

I get the results that I expect from the procedure. The following objects are returned:
usp_GetTotalComputerSales
vw_SalesHistory

The view vw_SalesHistory is returned because it is directly dependent upon the SalesHistory table. The procedure usp_GetTotalComputerSales is returned because it is dependent upon the view vw_SalesHistory, which in turn is dependent upon the SalesHistory table.
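Because the optional @ObjectType parameter is matched against the type column in sys.objects, you can also restrict the starting object by type; for example, passing 'U' limits the match to a user table:
-- Start the dependency search from the user table named SalesHistory
EXECUTE sp_FindDependencies 'SalesHistory', 'U'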
Use with caution
The ability to view objects that are dependent upon other objects (e.g., views that use tables, procedures that use views) is useful when you need to alter or remove certain objects. Be extra careful when you modify objects that other objects may depend on.
Tuesday, January 1, 2008
How do I... Install Windows Vista in a dual-boot configuration along with Windows XP?
Are you really excited about the prospect of experimenting with the new features in the Windows Vista operating system, but are not yet ready to give up your existing Windows XP installation? For instance, you may be on the fence because you're not 100 percent sure that all your existing hardware and software will work in Vista, and you still need them to get your work done.
If so, then you may be the perfect candidate for a dual-boot configuration. With this type of configuration, you can easily experiment with Windows Vista and still use Windows XP. In other words, you get to have your cake and eat it too.
In this article, I'll discuss some of the options you'll need to consider as you begin thinking about and planning for adding Windows Vista to your existing system in a dual-boot configuration. I’ll then walk you step by step through the entire procedure.
The location options
In order to install Windows Vista in a dual-boot configuration along with Windows XP, you need to have either a second partition on your existing hard disk or a second hard disk in your system. To give yourself enough room to experiment, you should have at least 20 GB and preferably 40 GB of space available on either the second partition or on the second hard disk.
If you don't have enough available space on your existing hard disk for a second partition, then you'll need to connect a second hard disk to your system. If you do have enough available space on your existing hard disk for a second partition, then you'll need to obtain a partitioning software package. I recommend Symantec’s Norton PartitionMagic only because I’ve used PartitionMagic for years. However, there are other partitioning software packages that I’ve heard are just as good, such as Acronis Disk Director or VCOM Partition Commander Professional.
Of course, detailed instructions on connecting a second hard disk or partitioning your existing hard disk are beyond the scope of this article. However, in either case, the second hard disk or the second partition must be formatted with NTFS before you begin the installation operation. If you add a second partition to your existing hard disk via a partitioning software package, you will be able to format it as NTFS at the same time as you create the partition. If you're installing a second hard disk, the easiest way to format it as NTFS is from within Windows XP’s Disk Management console, which you can quickly open by pressing [Windows]+R to access the Run dialog box and typing diskmgmt.msc in the Open text box.
The installation options
You can approach the dual-boot installation operation in one of two ways -- by cold booting from the Windows Vista DVD or by inserting the Windows Vista DVD while Windows XP is running. As you can imagine, you'll encounter slightly different introductory screens depending on which approach you use, but once you get started the operation is essentially the same.
While both methods will produce the same result, I prefer cold booting from the DVD. The main reason is that you don't have to worry about any interference from antivirus/antispyware/firewall software on your existing Windows XP installation.
Performing the installation
Once you have your second partition or second hard disk operational, just insert your Windows Vista DVD, restart the system, and boot from the DVD. Once the system boots from the DVD, Windows Vista’s Setup will begin loading and will display the screen shown in Figure A.
Figure A:

Windows Vista’s Setup will take a few moments to load files before the installation actually commences.
In a few moments, you’ll see the screen that prompts you to choose the regional and language options, as shown in Figure B. As you can see, the default settings are for the U.S. and English, and if that’s you, you can just click Next to move on.
Figure B:

The default settings on the regional and language screen are for the U.S. and English.
On the next screen, you’ll be prompted to begin the installation procedure, as shown in Figure C. To begin, just click the Install Now button.
Figure C:
To get started, click the Install Now button.
In the next screen, you’ll be prompted to type in your product key for activation, as shown in Figure D. By default, the Automatically Activate Windows When I’m Online check box is selected; however, you’ll notice that I’ve cleared it. The main reason I’ve done so here is that, while writing this article, I’ve experimented with this installation procedure over and over, and I want to conserve the number of times that I can legitimately activate this copy of Windows Vista before Microsoft locks it down and requires me to call in and manually request a new product key.
Figure D:

At this point in the installation, you’re prompted to type in your product key for activation.
Now, if you just want to temporarily install Vista in a dual-boot configuration while you experiment, but plan on installing it as your main operating system once you’re satisfied with the way that Vista behaves with your hardware and software, you too may want to disable the automatic activation routine. Even with automatic activation disabled, you can still install Windows Vista and use it as you normally would for 30 days.
If you want to keep Vista in a dual-boot configuration, you can activate your license online anytime you want. If you decide to make Vista your main operating system, you can repartition your hard disk, reinstall Vista on the main partition and activate the new installation in the process.
If you decide to disable the automatic activation routine, you’ll see a confirmation dialog box, as shown in Figure E, which contains a harsh warning and prompts you to reconsider. You can just click No to continue.
Figure E:

Even though this dialog box contains a harsh warning, Microsoft wouldn’t have made automatic activation a choice if opting out was really dangerous.
Because I didn’t enter a product key, Setup doesn’t know which edition I’ve purchased and prompts me to select one of the seven editions on the disk, as shown in Figure F. Since I'm working with the Ultimate edition, I selected that edition, checked the box, and clicked Next.
Figure F:

When you don’t enter a product key, Setup doesn’t know what edition you have a license for and so prompts you to select one of the seven editions.
On the next page (Figure G), you’ll see the Microsoft Software License Terms and are prompted to read through them. However, unless you’re very curious, you can just select the I Accept The License Terms check box and click Next.
Figure G:

Unless you’re very curious, you can just click through the license terms screen.
If you’re booting from the DVD, when you get to the Which Type Of Installation Do You Want page, the only option is Custom (advanced), as shown in Figure H. To move on, just click the Custom icon.
Figure H:

When you boot from the Windows Vista DVD, the only installation type available is Custom (advanced).
When you arrive at the Where Do You Want To Install Windows? page, you’ll see your second partition or second drive. I created a second partition on which to install Windows Vista, so my page looked like the one in Figure I.
Figure I:

I created a second partition on a 160 GB hard disk on which to install Windows Vista.
Once you select a partition or disk and click Next, the rest of the installation will continue as it normally would. As such, I won’t follow the installation procedure any further in this article.
Windows Boot Manager
Once the installation is complete, you'll see the Windows Boot Manager screen, as shown in Figure J. As you can see, booting either Windows XP (listed as an Earlier Version of Windows) or Windows Vista is a simple menu choice. This menu will appear on the screen for 30 seconds before Windows Boot Manager launches the default operating system, which is Windows Vista.
Figure J:

The Windows Boot Manager allows you to select which operating system you want to boot.
The Activation countdown
Since I described installing Windows Vista without activating it for testing purposes, I wanted to point out that Windows Vista will indeed keep track of your 30-day trial on the System screen, as shown in Figure K. In addition, it will regularly display reminders prompting you to activate.
Figure K:

If you decide not to activate during your dual-boot installation, you can keep track of how many days you have until you must activate on the System page.
Configuring Windows Boot Manager
As I mentioned, the Windows Boot Manager menu will appear on the screen for 30 seconds before Windows Boot Manager launches the default operating system -- Windows Vista. However, if you wish to adjust the countdown or change the default operating system, you can do so from within Windows Vista.
Once you've booted into Windows Vista, press [Windows]+[Break] to access the System page. Next, click the Advanced System Settings link in the Tasks pane and confirm through the UAC prompt. When you see the System Properties dialog box, click Settings in the Startup and Recovery panel. You’ll then see the Startup and Recovery dialog box, as shown in Figure L.
Figure L:

You can use the controls in the Startup and Recovery dialog box to change the default operating system and the number of seconds that the Windows Boot Manager menu will appear on the screen.
In the System Startup pane, you can change the Default Operating System setting from the drop-down list, as well as use the spin buttons to adjust, up or down, the number of seconds to display the menu before launching the default operating system.
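If you're comfortable at the command line, the same settings live in Vista's Boot Configuration Data (BCD) store and can be changed with the built-in bcdedit tool from an elevated Command Prompt. A minimal example (run bcdedit /enum first and confirm the identifiers on your own system; {ntldr} is the standard identifier for the legacy Windows XP loader entry):

bcdedit /enum
bcdedit /timeout 10
bcdedit /default {ntldr}

The first command lists your boot entries, the second shortens the menu countdown to 10 seconds, and the third makes Windows XP the default operating system.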
Conclusion
Installing Windows Vista in a dual-boot configuration alongside Windows XP is a great way to experiment with the new operating system until you get comfortable with it. In this article, I’ve shown you how to create a Windows Vista dual-boot configuration.
Wednesday, December 26, 2007
10 ways to work better with your boss
Bosses: You can’t live with them, and you can’t live without them. Like it or not, most of us must deal with a boss, and the way we do so affects not just our career advancement and our salary, but also our mental well-being. Here are some tips on how to get along better with your boss.
#1: Remember that your boss just might have useful insights
Think you have a clueless boss? Remember the words of Mark Twain, who once said that when he was 14, his father was so stupid it was unbearable. Then, he continued, when he became 21, he was amazed at how much his father had learned in just seven years. Your boss might be smarter than you think, and maybe later in your career, you will appreciate that fact. Regardless, a bad boss can still offer good advice.
I remember what a boss from years ago told me about the workplace. He said I should be aggressive and find out what people needed done rather than sit back and wait for assignments.
Think of it this way: You still can learn from a bad boss. Analyze why that boss is a bad boss and then resolve to avoid those things if you ever become a boss yourself. As the cynic reminds us, even a stopped clock is correct twice a day.
#2: Know your boss’ objectives
Software developers often concern themselves with “traceability.” The requirements for a software system must directly or indirectly be tied, or traced, to the objectives of the company. In theory, therefore, any requirement that lacks such traceability should be considered irrelevant and removed.
In the same way, try to see the bigger picture. You need to know what the boss expects of you (see the next tip). But at the same time, you need to understand how your job helps the boss. Make sure that what you’re doing not only meets your own job description but helps the boss achieve his or her own objectives.
#3: Know what your boss expects of you
When I was young, I once complained to my mother that I had nothing to do. “Calvin,” she answered, “Why don’t you practice piano?” That was the last time I ever complained to her about that topic.
Ignorance of your parents’ wishes may be fine when you’re a child, but ignorance (willful or otherwise) of your boss’s expectations can kill your career. How can you expect a good performance evaluation if you’re unaware of how you’re going to be measured? If you know your objectives, are they quantifiable? If so, both of you will have an easier time during your evaluation.
Every once in a while, check with your boss about what you’re doing and what you’ve accomplished and make sure your boss has that same understanding. If your boss has issues with your performance, it’s better for both of you that you know sooner rather than later, so you have time to make adjustments.
In a perfect world, no surprises should arise during your performance review. If they do, either your boss didn’t communicate the objectives or you failed to understand them. Don’t let that happen to you.
#4: Be low maintenance
Don’t be the “problem employee,” the one the boss always has to check up and follow up on. Instead, try to be the one the boss can depend on. It might not be apparent immediately, but a good boss will recognize and appreciate that trait.
Are you going to be perfect in your work? Of course not. You’re probably going to make a mistake or create a problem at least once. However, when that happens, and you go to your boss (as you should, as mentioned below), try to go not just with the report of the problem. Think of some solutions and be prepared to offer your recommendations to your boss.
#5: Don’t surprise your boss
Don’t let your boss be blindsided by bad news. In other words, “fess up” if you created a problem or made a mistake. It’s better that bad news about you should come from you — not from a customer, not from a co-worker, and absolutely not from your boss’s boss. Did you have a negative interaction with an abusive caller or customer? As soon as the call is finished, call your boss and give a briefing. Tell the boss who you spoke with, why that person is upset, and what the boss can expect to hear from that person. Also give your side of the story.
The same advice applies to good news as well. Let your boss know about your successes. Otherwise, your boss might give the impression of being unaware of them when his or her own boss offers congratulations.
#6: Acknowledge your boss in your successes
The moment has arrived: You’re in front of your group, receiving an award or other recognition from your boss or your boss’ boss. An appropriate thing to do at this point is to recognize the people who made it possible, in particular your boss. It’s easy to do if your boss really did help you. What about the “difficult” boss, though? You should try to say something, but at the same time you probably should be truthful as well.
Remember what we discussed above — that even a bad boss can provide good insights and examples. Did your boss discourage you or make things difficult? Maybe, in that case, you could thank your boss for helping you “keep things in perspective” or for “serving as a sanity check” or for helping you “see the problem from multiple points of view.” Don’t push things, or you may start sounding cute and insincere. However, do try to say something about your boss’ help.
#7: Don’t take criticism personally
Because most of us are so involved with our work, it’s hard to separate ourselves from it. So when someone criticizes our work, we view that criticism as a personal attack. Reacting that way can hinder our development and our progress. The next time your boss (or anyone else) criticizes your work, try pretending that the work was done by someone else. Then, examine it as a third party would and test the validity of the criticism.
A smart boss realizes that your success is tied to his or her own success. Therefore, the boss has an interest in your doing well. Furthermore, criticism from the boss could be a sign that the boss has high expectations from you. When I first began working, I was upset because my boss had given me a task that I thought was too hard. I discussed my concern with a friend of my father, who worked in the same area as I did. Though it happened years ago, I still remember that friend’s advice. “Calvin,” he said, “[name of boss] gave you that task because he thinks you can do a good job.”
#8: Remember your boss has a boss
We discussed earlier the importance of knowing your boss’ objectives. In the same vein, be aware that your boss has a boss as well. You can use that fact to build a collaborative relationship with your own boss, because both of you have a common objective of making the boss’ boss happy and making your boss look good. Having that collaborative relationship gives your boss a better impression of you and gives you visibility to your boss’ boss.
#9: Don’t upstage your boss
Upstaging your boss can limit your career mobility. Therefore, be careful of correcting your boss in public, as someone did to my father once. While he was making a group presentation, he referred to Worcester Polytechnic Institute. In doing so, he correctly pronounced it as “Woo-ster.” This person spoke up, saying, “Wellington, you’re wrong. It’s ‘Woo-ches-ter.’” Fortunately, my father was smart, deflecting the comment with the following answer: “I’m sorry. Please forgive me. English is only my fifth language.” My father humorously defused the situation. However, the fact that after all these years I still hear this story tells you what my father thought of that correction and the person who made it.
There’s one instance when it’s okay to correct your boss in public: when your boss mistakenly thinks he or she made a mistake but really didn’t. Suppose your boss quotes a figure while giving a presentation. He or she then stops and says, “I’m sorry, I think I made a mistake.” If you know the boss was originally correct, it’s fine at that point to interrupt and say, “No, [boss’ name], you’re correct.”
#10: Manage your boss when necessary
Getting ahead in your career requires more than just sitting back and waiting for assignments. You must take initiative, looking for opportunities and problems to be solved. In doing so, take advantage of any organizational power your boss might have. Explain to your boss your plans and why they represent a good business decision. Then, ask your boss to fight any bureaucratic battles that may arise and to run interference for you. In doing so, you recognize that the boss is the boss. However, you are also directing your boss, taking advantage of pull that you may lack.
10 things to look for in an offsite backup provider
Automated offsite backup services are all the rage. Remote Data Backups and Online Backup are among the best-known contenders.
Unlike online storage services, offsite backup providers offer not only gigabytes of offsite file storage but also automated backup software designed to automatically back up the data you specify. That’s a critical difference that should be noted: Online storage services don’t provide automated backup functionality. Sure, online storage services are cheaper. But they’re useless in protecting your data if you forget to manually back up files every day as they change or as new files are created.
Unfortunately, not all offsite backup services are created equal. Some of the services work better than others, and pricing varies, as does the quality of the automated backup software. Here are some things to keep in mind as you evaluate offsite backup providers.
#1: Reliable software
Backup firms, like any other service provider, will promise the world. But actually delivering on all the promises (simple backup configuration, HIPAA-compliant security, easy recovery, seamless integration in Windows, etc.) is another matter altogether.
I’ve sampled and deployed automated backup services from a number of providers. Some that propose to provide easy 1-2-3 backup operations fail to run, prove incompatible on server platforms, or generate cryptic errors.
Backups are too important to trust to chance. Make sure that the backup software you deploy works well on the OS platforms you require. Many automated offsite backup services run best on Windows XP, while others perform well on Windows Vista and Windows server OSes. The only way to really know is to test a service’s application before rolling it out on production systems. That’s why item #8 (free trials) is so important, but more on that in a moment.
#2: Storage plans that meet your needs
Some offsite backup services bill by the gigabyte. That’s fine. There’s no trouble there, other than the fact that the fee structure makes budgeting backup costs more difficult.
Other service providers, though, sell accounts with specific storage limits (100MB, 4GB, 10GB, etc.) and flat fees. Those plans work well and simplify budgeting, at least until organizations unexpectedly exceed their storage limits.
Look for service providers with storage limits or pricing plans that meet your organization’s needs while also proving flexible. Remote Data Backups, for example, makes it easy (just a few clicks) to upgrade from a 4GB account to a 10GB plan (or from a 10GB to a 30GB account). Clients need only pay the difference between the two storage plans (not start from scratch).
#3: Stellar reporting tools
A leading benefit of automated backup services is peace of mind. Knowing critical data is automatically being backed up offsite is more than just a relief. With critical data safely secured, you can move on to addressing other tasks.
IT professionals, though, are typically (and rightfully so) a skeptical crowd. So they want, or require, more than just a promise that critical data is being backed up; they need confirmation.
Only with detailed and accurate backup reporting (Figure A) can you be sure that systems and data are being properly backed up. Insist on file-level reporting with any backup service provider. In addition to a daily list of every file that’s backed up, look for reporting tools that list file sizes, time of transfer, and any error details.
Figure A

Remote Data Backups creates log files that track numerous details about each file that’s backed up.
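Detailed reports are also something you can check programmatically. Here is a minimal sketch in Python that assumes a hypothetical CSV report format with filename, size_bytes, transferred_at, and error columns; real providers' report formats will differ, so treat this purely as an illustration of the idea:

import csv
import sys

def summarize(report_path):
    # Count files listed and collect per-file errors from the report.
    total, errors = 0, []
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            if row.get("error"):             # nonempty error field = failed file
                errors.append((row["filename"], row["error"]))
    print(f"{total} files listed, {len(errors)} failed")
    for name, err in errors:
        print(f"  FAILED: {name} -- {err}")
    return 1 if errors else 0

if __name__ == "__main__":
    sys.exit(summarize(sys.argv[1]))

A nonzero exit code makes the check easy to wire into whatever monitoring you already run.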
#4: An approachable backup application
The backup application itself must be easy to use and as close to foolproof as possible. Many leverage Windows Explorer-like interfaces (Figure B), where you just need to check boxes for those files and folders that require backing up.
Take advantage of a trial period. Work first hand with the software. Confirm the service’s backup application and interface are sufficiently simple to avoid confusion but flexible enough to meet the organization’s needs.
Figure B

The Mozy Backup tool features a simple Explorer-like interface for specifying which files/folders should be backed up.
In most cases, backup software isn’t Microsoft Exchange aware (or can’t properly back up active databases). In such circumstances, confirm that you can automate an Exchange or database backup (using Windows’ built-in or another locally installed backup program) and have the alternative backup program park copies of the backups it creates in folders the backup provider’s software can accommodate. Better yet, seek backup applications that can manage active database and e-mail systems’ data (but be prepared to pay handsomely for the privilege — I’ve yet to find one that justifies the cost).
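As a sketch of that staging approach, Windows XP and Server 2003 include NTBackup, which can be scripted; the path, job name, and staging folder here are made up for illustration, and you'd point your offsite provider's software at the staging folder:

ntbackup backup "C:\Data" /J "Nightly staging" /F "D:\BackupStaging\nightly.bkf"

Schedule that command with the Scheduled Tasks wizard to run after hours, and the provider's client then ships the resulting .bkf file offsite on its own schedule.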
#5: Simple recovery
When hard disks fail, users accidentally delete files, or other systems errors occur, IT professionals need to be able to recover files quickly. Conduct tests of backup providers’ recovery functions to confirm that file recovery is simple, fast, and secure.
In other words, make sure it’s easy for you to recover data that’s been backed up offsite but that unauthorized parties won’t be able to do the same.
#6: Secure file transfer
Security has always been an issue with backups. Whether strategies involved giving one set of IT pros backup rights and another set restoration privileges, organizations have always struggled for a reasonable balance between security and operational efficiency when addressing backup issues.
Security remains a concern when selecting an automated offsite backup provider. Insist on deploying a service that meets HIPAA and SOX/Accounting security requirements. Most backup providers support at least 128-bit AES encryption and SSL security. Don’t work with a provider offering anything less.
Further, when creating automated offsite backup accounts, protect the account information (and recovery hashes or passwords) carefully. Distribute such keys sparingly and change them whenever technology employees leave the organization.
#7: 24/7 support
Disk failures and other data loss episodes don’t always occur during office hours, and they almost always require repair and recovery operations after hours (to minimize disruption to other users). Thus, you should confirm that your backup service provider’s technicians will be available when you need them most. Many backup providers boast 24/7 support. Before signing any contracts or purchasing service, make sure you’ll be able to reach its support personnel during odd hours should troubleshooting assistance ever be required.
#8: Free trials
The best way to determine whether an offsite backup provider works well for your organization is to sample its wares. Not only should you test the backup software application, support procedures, and reporting tools, but you should conduct a test recovery as well.
Only by walking through the process (creating an account, installing the backup client application, running backup operations, contacting technical support, reviewing report files, and performing a data restore) can you accurately determine whether a backup service provider offers an approachable backup program, quality support, and reliable reporting and recovery processes. Also, potential incompatibilities (between data files, databases, Windows, and the backup software itself) are too numerous to reasonably forecast. Testing online backup tools on systems configured like your production machines will help eliminate surprises and potential downtime when the time for real-world deployment arrives.
#9: Version tracking
Several backup providers support the ability to maintain multiple file versions. The ability to go back and reference several versions of a particular file can prove quite valuable.
When simple backup operations run, files from the previous backup (such as those backed up the night before) are written over. Most organizations back up data daily (at night). With such backup schedules, little time exists to discover errors (such as an accountant realizing he or she entered incorrect data in a budget file). If such errors aren’t caught within a day, of course, the budget file with the correct data will be written over by the file containing errors that night. With versioning file systems, several versions (or historical copies) of the same file can be maintained to recover from just such mistakes.
Look for this feature. It can bail out harried users who mistakenly corrupt good data.
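To make the idea concrete, here is a minimal sketch of version retention in Python. It simply rotates numbered historical copies before overwriting, which is the gist of what a versioning backup service does for you; the file names and retention count are illustrative:

import shutil
from pathlib import Path

def backup_with_versions(src, dest_dir, keep=3):
    # Copy src into dest_dir, keeping up to `keep` numbered historical
    # copies (file.xls.1 is the most recent prior version).
    src = Path(src)
    dest_dir = Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    current = dest_dir / src.name
    oldest = dest_dir / f"{src.name}.{keep}"
    if oldest.exists():
        oldest.unlink()                      # discard the oldest version
    for n in range(keep - 1, 0, -1):         # shift .2 -> .3, .1 -> .2, ...
        older = dest_dir / f"{src.name}.{n}"
        if older.exists():
            older.rename(dest_dir / f"{src.name}.{n + 1}")
    if current.exists():
        current.rename(dest_dir / f"{src.name}.1")
    shutil.copy2(src, current)               # store the new backup

# e.g. backup_with_versions("budget.xls", r"D:\Backups", keep=3)

With keep=3, the accountant in the example above could still pull back the pre-mistake budget file two or three nights later.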
#10: E-mail alerts
Numerous distractions demand IT professionals’ attention. Whether failed routers, nonfunctioning remote connections, new user accounts, or other common break/fix issues arrest your workday, backup operations must still be monitored. Unfortunately, in the heat of putting out fires and attending other crises, it’s easy to overlook backup issues until it’s too late.
Some offsite backup providers support sending alerts, bringing your attention to problems via e-mail. Without this feature, you might remain unaware that backups are failing or that larger issues exist. By insisting on a backup provider that forwards e-mail alerts when backups fail or encounter errors, organizations can ensure their IT staff stays on top of backup operations and receives SOS messages when troubles do arise.
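If your provider lacks built-in alerts, a small script can approximate them. Here is a minimal sketch, assuming a hypothetical log file location and your own SMTP relay; the addresses, host name, and path are placeholders:

import smtplib
from email.message import EmailMessage

def send_backup_alert(log_tail, smtp_host="mail.example.com"):
    # E-mail the tail of a failed backup log to the IT staff.
    msg = EmailMessage()
    msg["Subject"] = "ALERT: nightly backup reported errors"
    msg["From"] = "backup-monitor@example.com"
    msg["To"] = "it-staff@example.com"
    msg.set_content(log_tail)
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)

# Scan last night's log (path is illustrative) and alert on any errors.
log_lines = open(r"C:\BackupLogs\latest.log").readlines()
if any("ERROR" in line for line in log_lines):
    send_backup_alert("".join(log_lines[-20:]))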
Thursday, December 13, 2007
Address large-scale disasters with the Business Continuity Planning trinity
Takeaway: Large-scale disaster planning and recovery must address three areas: human resources, facilities management, and information technology. Joint planning is crucial for these groups to work together in a crisis.
In any column dealing with Business Continuity Planning (BCP) and Disaster Recovery (DR), there will no doubt come a time when the discussion must turn to large-scale disasters. There has been a great deal of press and awareness of man-made disasters, and lately there has been a true surge in coverage of natural disasters with hurricane after hurricane slamming into multiple cities again and again. Both types of disasters can and do cause massive loss of systems, even entire locations, not to mention the loss of life involved in the wake of these events. How will your organization handle this type of disaster?
No organization can claim readiness for large-scale disasters without addressing the trinity of BCP: Human Resources, Facilities Management, and Information Technology. This trio must work in concert to properly overcome a disaster's impact, so you will not be able to do this alone as an IT professional. It would seem that even with all three groups working together, you will still have an overwhelming task ahead of you, but if you break the tasks down into component parts, you can manage the event and maintain your business systems.
The first order of business is to get good information flowing in. In the wake of a major disaster--natural or man-made--you will no doubt find a wealth of information that you will need to sift through to verify what is real, versus what is either imagined or simply exaggerated. Case in point: After the initial shock of the power failures in the northeast United States in August 2003, many people were absolutely convinced it was a terrorist attack, when in fact it was simply a large-scale technology failure across several systems. Finding out what happened and what resources you still have available is a vital first step in the process of dealing with a disaster.
Your next priority is to get good information flowing out. Make sure everyone who needs to be in the loop during the initial recovery process is available, or that substitutes are brought in. It may sound easy on the surface, but remember that landline and mobile phone service may be interrupted, e-mail systems will probably be offline, and other communication systems may be acting erratically. Find the systems that are still working and get the word out as soon as possible.
Hopefully, you have already determined your Recovery Time Objectives (RTO) for your various systems before the disaster struck. If not, there is very little you can do but try to bring everything back up as soon as you can. If you do have RTO numbers, start working with the shortest recovery times and bring those systems up in alternate locations first, and leave all the other systems for later--no matter how much people start yelling at you to bring them up sooner.
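As a minimal sketch of that triage, recovery order is just a sort on your RTO numbers; the systems and hours below are invented for illustration:

# Recovery Time Objectives in hours -- illustrative values only.
systems = [("e-mail", 24), ("order entry", 4), ("reporting", 72), ("voice/phones", 2)]

# Shortest RTO first, regardless of who is yelling loudest.
for name, rto_hours in sorted(systems, key=lambda s: s[1]):
    print(f"restore {name} (RTO: {rto_hours} h)")

Having the list written down (and agreed to) before the disaster is what lets you stick to it during one.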
At this point, you must concentrate your staff on the most important systems first, regardless of the apparent urgency that already panicked staffers may express to you regarding other systems that everyone agreed were less important prior to the actual disaster. Keep in mind that this may mean finding alternate data-center space and acquiring new hardware if you haven't already planned for these eventualities. This is where Facilities Management comes in to make sure you have a location to set all this up.
Finally, after all the urgent issues have been addressed, you can begin to bring up other data systems as time and equipment allow. If you're in a smaller shop, HR, Facilities, and IT may all be the same person, making your job somewhat easier and harder at the same time, but all three groups must be brought into the equation.
Dealing with a large-scale disaster is something everyone would prefer to avoid. Recent events have proven that it is--unfortunately--an eventuality that no organization can afford to ignore.
Monday, December 10, 2007
10 pieces of hardware you should replace rather than repair
Any time a computer component stops working, or just becomes unstable — as we all know will happen from time to time — we have to decide whether to replace it, have it repaired, or just get by as is with perhaps a temporary fix. Repair or just getting by will nearly always be the cheapest solution, at least in the short run. Replacement, however, will usually provide a good opportunity to upgrade. In fact, given the rate at which the various technologies behind computer hardware are advancing, unless you replace something a week after you buy it, you may almost be forced to upgrade.
Following are a few items which, if replaced (and generally upgraded), can provide excellent benefits, from an enhanced user experience to additional compatibility, greater longevity, and stability for the whole system.
#1: Power supply
One of the most overlooked pieces of computer hardware is the power supply unit (PSU). Computer enthusiasts often brag about their blazing fast processors, top-of-the-line video cards, and gigs upon gigs of RAM, but rarely about their great PSUs.
The truth is, the power supply is the last thing we should skimp on when choosing components for our system. If a computer’s brain is its processor, its heart is the power supply. And having one that is worn out, underpowered, unstable, or just generally cheap can be a major cause of hardware failure.
Every computer’s power requirements are different, but a good minimum for a modern PC is 450 watts. Some systems, especially those with multiple high-end video cards or lots of add-on cards and peripherals may require a PSU rated at 800 watts or more. Replacing a failing or inadequate power supply can make a previously unstable system stable.
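To put rough numbers on it (these are illustrative figures, not measurements): a midrange system with a 95-watt processor, a 150-watt video card, and another 100 watts or so for the motherboard, drives, and fans draws roughly 345 watts under load, so a quality 450-watt unit leaves comfortable headroom.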
Aside from supplying enough power, that power must be supplied stably. A common cause of “unexplained” lockups and system crashes is a drop in voltage supplied to the system when under load, caused by a poorly manufactured PSU. The easiest way to find a quality PSU is to stick to the consistently top brands such as Antec, EnerMax, and PC Power & Cooling.
#2: Fans
As computers have gotten more powerful over the last decades, they have also gotten hotter. Gone are the days of a passively cooled Pentium 100; now we have fans on our massive CPU heatsinks, on our monster video cards, and on intake and outtake vents to our computer cases. All of these fans are playing important roles by keeping our computers safely cooled, and we should try to ensure that they continue doing so.
Fans are one of the few parts that, when they fail, usually won’t be replaced with anything better. But they deserve mention because:
As one of the few moving parts in our system, they are one of the most likely to actually break.
When they break, it’s likely to pass unnoticed or not cause much concern.
Also, fans are cheap and easy to replace. It generally takes about 10 dollars, 15 minutes, and a screwdriver to install a new one, so there’s really no good excuse for not doing so.
#3: Surge protector / UPS
This is another item that keeps our computers safe and should not be neglected. A surge protector can be a stand-alone power strip, but one is also built into virtually every uninterruptible power supply (UPS). The surge protector guards our devices against spikes in energy that occur in our circuits at the home or office, usually due to lightning or the powering up of high-powered devices, such as hair dryers or refrigerators. Repairing a surge protector would be difficult and expensive at best; replacement is almost always the best option.
It can be tricky to know when it’s time to replace a surge protector, because the component inside that diverts excess power from surges to the ground simply wears out with repeated use. However, there is often no interruption of power or other indication that the protection is gone. You may still have juice but not be protected. The cheapest protectors may wear out after fewer than 10 small surges, while the better ones can last through hundreds. The safest thing to do is to buy higher-quality protectors and still replace them occasionally.
#4: Video card
The video card is one of the most important elements in the performance of your system and overall user experience. Even though it is also one of the priciest components, there are two good reasons to replace it should your old one bite the dust.
First, video cards are one of the components that are being improved upon seemingly every day. Just like with CPUs, a video card that’s two years old simply isn’t as fast as a current one and won’t have the newest features (such as support for DirectX 10).
Also, the video card is the number one hardware roadblock in the migration to Vista. Manufacturers just aren’t providing new Vista-compatible drivers for many of their old video cards. This means that many of us will have to replace our video cards whether they are broken or not, if we plan to switch to Vista.
#5: Flash media reader
All kinds of devices use flash cards these days: cameras, MP3 players, even cell phones. These small devices let us take our data anywhere easily. Since it seems as if every device uses a different format of flash media, most of us have all-in-one type card readers. If the reader breaks or gets lost (which seems to happen a lot), there are two excellent reasons for upgrading to a newer model instead of trying to repair the old one.
First, many old card readers are USB 1.1. The newer ones use USB 2.0 instead, which at 480 Mbps is 40 times faster than USB 1.1’s 12 Mbps. This is more than enough reason to replace an old reader, even if it’s not broken.
In addition, new formats are constantly coming out for flash cards, and when they do, you need a new reader to use them. For example, Secure Digital High Capacity (SDHC) and xD from Fujifilm are not supported by older readers.
#6: CD/DVD drives
Considering that it has moving, spinning parts, the average CD/DVD drive is actually fairly robust. Because of that, however, many people are still using old read-only (or CD-RW) drives instead of amazingly cheap (and handy) DVD writers. If you’re still using an old drive and it finally gives up the ghost, you’ll probably be glad it did when you replace it with a DVD/CD-RW combo drive for less than 50 dollars.
#7: Hard drives
The computer component we all least want to fail is the hard drive; it’s easier to cope with the loss of the much more expensive processor or video card as long as we still have our precious data. So when a drive fails, the first instinct is to try to repair it. But if you’ve been practicing good backup habits, you can actually come out of the situation better off when you replace the old drive with something bigger and faster.
The “giant” 100-GB hard drive of a few years ago is no longer so large. Today, you can get 750 GB for less than 200 bucks. In addition to being much, much larger, newer hard drives will generally be Serial ATA II (SATA II), which has a maximum data transfer rate of about 300 MB/s as opposed to SATA I’s 150 MB/s and the older Parallel ATA (PATA) rate of 133 MB/s. SATA II is fairly new, so many motherboards don’t support it. But even if yours doesn’t, the SATA II drives generally have a jumper that can put them in SATA I mode.
TIP: Right now, most SATA II hard drives ship with this limiting jumper in place by default, so if your board does support SATA II, be sure to change the jumper before you install the drive.
#8: Monitor
With the exception of servers, a computer isn’t much good without a monitor. Monitors rarely make it all the way to the stage of completely not working, because we replace them when they start to fade. If you replace a monitor that’s more than a few years old, the new one will likely bear little resemblance to the old.
Any reluctance you may have had to switch from the giant 50-pound cathode ray tube (CRT) monitor to a slim and featherweight liquid crystal display (LCD) should be gone by now. The gap between CRTs and LCDs in color rendering and refresh rates is now very small. Unless you’re a graphics designer who needs a multi-thousand-dollar large-screen CRT, the benefits LCDs enjoy in size, weight, power consumption, and reduced eye fatigue will far outweigh any small performance advantages of a CRT. Outside the extremely high and extremely low ends of the market, it’s quite hard to find a new CRT monitor anyway.
If you were already using an LCD that’s a few years old, when you replace it you’ll enjoy those leaps in performance that the LCDs have made in the last few years.
#9: Keyboard
Since so many of us spend hours every day banging away at them, it’s important to have a keyboard that’s comfortable and efficient. And since we use them so much and often so brutally, it is no wonder that they break often. Keys come off, get stuck, or just get really dirty. When these things happen, you should usually go ahead and replace the keyboard rather than live with the hassle.
Today’s keyboards have new, handy features. Some have built-in user-defined macro keys for often-repeated commands; some can fold up for easy transport; some have built-in ports so they can double as USB hubs. There is a keyboard with some unique feature to suit nearly anyone’s needs.
#10: Motherboard and processor
Replacing the motherboard is always the most involved upgrade. Since it usually means “starting over” with a clean installation of the operating system, lots of people are reluctant to change to a newer board even when the old one gives up the ghost, preferring instead to replace it with the exact same model, thus avoiding having to wipe the OS. However, since a motherboard upgrade is the most involved, it also can give the widest range of benefits.
First and foremost, replacing the motherboard usually gives us the chance to upgrade to the latest processor technology. Today, you can get the benefits of a dual or even quad CPU setup with only one processor, thanks to multi-core technology, in which more than one processing core is placed on a single die. In a multitasking or multithreaded environment, this can increase your computer’s effective performance by a factor of up to two or four.
Additionally, upgrading the motherboard gives you access to new technologies for other components. PATA and SATA I hard drives (and optical drives) can be upgraded to SATA II. AGP video cards can be upgraded to PCI-E. USB 1.1 ports become USB 2.0. The list goes on for virtually every component. Sometimes, even though it can be a pain, starting over can be the best thing.
Sunday, December 9, 2007
10 types of programmers you’ll encounter in the field
Programmers enjoy a reputation for being peculiar people. In fact, even within the development community, there are certain programmer archetypes that other programmers find strange. Here are 10 types of programmers you are likely to run across. Can you think of any more?
#1: Gandalf
This programmer type looks like a short-list candidate to play Gandalf in The Lord of the Rings. He (or even she!) has a beard halfway to his knees, a goofy looking hat, and may wear a cape or a cloak in the winter. Luckily for the team, this person is just as adept at working magic as Gandalf. Unluckily for the team, they will need to endure hours of stories from Gandalf about how he or she had to walk uphill both ways in the snow to drop off the punch cards at the computer room. The Gandalf type is your heaviest hitter, but you try to leave them in the rear and call them up only in times of desperation.
#2: The Martyr
In any other profession, The Martyr is simply a “workaholic.” But in the development field, The Martyr goes beyond that and into another dimension. Workaholics at least go home to shower and sleep. The Martyr takes pride in sleeping at the desk amidst empty pizza boxes. The problem is, no one ever asked The Martyr to work like this. And he or she tries to guilt-trip the rest of the team with phrases like, “Yeah, go home and enjoy dinner. I’ll finish up the next three weeks’ worth of code tonight.”
#3: Fanboy
Watch out for Fanboy. If he or she corners you, you’re in for a three-hour lecture about the superiority of Dragonball Z compared to Gundam Wing, or why the PlayStation 3 is better than the Xbox 360. Fanboy’s workspace is filled with posters, action figures, and other knick-knacks related to some obsession, most likely imported from Japan. Not only are Fanboys obnoxious to deal with, they often put so much time into the obsession (both in and out of the office) that they have no clue when it comes to doing what they were hired to do.
#4: Vince Neil
This 40-something is a throwback to 1984 in all of the wrong ways. Sporting big hair, ripped stonewashed jeans, and a bandana here or there, Vince sits in the office humming Bon Jovi and Def Leppard tunes throughout the workday. This would not be so bad if “Pour Some Sugar on Me” was not so darned infectious.
Vince is generally a fun person to work with, and actually has a ton of experience, but just never grew up. But Vince becomes a hassle when he or she tries living the rock ‘n roll lifestyle to go with the hair and hi-tops. It’s fairly hard to work with someone who carries a hangover to work every day.
#5: The Ninja
The Ninja is your team’s MVP, and no one knows it. Like the legendary assassins, you do not know that The Ninja is even in the building or working, but you discover the evidence in the morning. You fire up the source control system and see that at 4 AM, The Ninja checked in code that addresses the problem you planned to spend all week working on, and you did not even know that The Ninja was aware of the project! See, while you were in Yet Another Meeting, The Ninja was working.
Ninjas are so stealthy, you might not even know their name, but you know that every project they’re on seems to go much more smoothly. Tread carefully, though. The Ninja is a lone warrior; don’t try to force him or her to work with rank and file.
#6: The Theoretician
The Theoretician knows everything there is to know about programming. He or she can spend four hours lecturing about the history of an obscure programming language or providing a proof of how the code you wrote is less than perfectly optimal and may take an extra three nanoseconds to run. The problem is, The Theoretician does not know a thing about software development. When The Theoretician writes code, it is so “elegant” that mere mortals cannot make sense of it. His or her favorite technique is recursion, and every block of code is tweaked to the max, at the expense of timelines and readability.
The Theoretician is also easily distracted. A simple task that should take an hour takes Theoreticians three months, since they decide that the existing tools are not sufficient and they must build new tools to build new libraries to build a whole new system that meets their high standards. The Theoretician can be turned into one of your best players, if you can get him or her to play within the boundaries of the project itself and stop spending time working on The Ultimate Sorting Algorithm.
#7: The Code Cowboy
The Code Cowboy is a force of nature that cannot be stopped. He or she is almost always a great programmer and can do work two or three times faster than anyone else. The problem is, at least half of that speed comes by cutting corners. The Code Cowboy feels that checking code into source control takes too long, storing configuration data outside of the code itself takes too long, communicating with anyone else takes too long… you get the idea.
The Code Cowboy’s code is a spaghetti code mess, because he or she was working so quickly that the needed refactoring never happened. Chances are, seven pages’ worth of core functionality looks like the “don’t do this” example of a programming textbook, but it magically works. The Code Cowboy definitely does not play well with others. And if you put two Code Cowboys on the same project, it is guaranteed to fail, as they trample on each other’s changes and shoot each other in the foot.
Put a Code Cowboy on a project where hitting the deadline is more important than doing it right, and the code will be done just before deadline every time. The Code Cowboy is really just a loud, boisterous version of The Ninja. While The Ninja executes with surgical precision, The Code Cowboy is a raging bull and will gore anything that gets in the way.
#8: The Paratrooper
You know those movies where a sole commando is air-dropped deep behind enemy lines and comes out with the secret battle plans? That person in a software development shop is The Paratrooper. The Paratrooper is the last resort programmer you send in to save a dying project. Paratroopers lack the patience to work on a long-term assignment, but their best asset is an uncanny ability to learn an unfamiliar codebase and work within it. Other programmers might take weeks or months to learn enough about a project to effectively work on it; The Paratrooper takes hours or days. Paratroopers might not learn enough to work on the core of the code, but the lack of ramp-up time means that they can succeed where an entire team might fail.
#9: Mediocre Man
“Good enough” is the best you will ever get from Mediocre Man. Don’t let the name fool you; there are female varieties of Mediocre Man too. And he or she always takes longer to produce worse code than anyone else on the team. “Slow and steady barely finishes the race” could describe Mediocre Man’s projects. But Mediocre Man is always just “good enough” to remain employed.
When you interview this type, they can tell you a lot about the projects they’ve been involved with but not much about their actual involvement. Filtering out the Mediocre Man type is fairly easy: Ask for actual details of the work they’ve done, and they suddenly get a case of amnesia. Let them into your organization, though, and it might take years to get rid of them.
#10: The Evangelist
No matter what kind of environment you have, The Evangelist insists that it can be improved by throwing away all of your tools and processes and replacing them with something else. The Evangelist is actually the opposite of The Theoretician. The Evangelist is outspoken, knows an awful lot about software development, but performs very little actual programming.
The Evangelist is secretly a project manager or department manager at heart but lacks the knowledge or experience to make the jump. So until The Evangelist is able to get into a purely managerial role, everyone else needs to put up with his or her attempts to revolutionize the workplace.
Saturday, December 8, 2007
10 ways to avoid age-bias landmines during the interview process
The IT industry can be a cruel career sector. According to an industry survey just a few years ago, tech professionals are viewed as “old” when they hit their early to mid-40s. And that isn’t the worst of it: while older professionals in most industries are valued for having more experience and expertise, it’s the opposite within the tech community.
If you fall into the category of older IT pros, you may encounter subtle age bias in questions and comments from interviewers. The trick is to identify the questions and know the best way to answer them, dismissing concerns about age right off the bat. Here are nine practice questions and suggested replies.
#1: Tell me about yourself
Focus on your experiences and goals that relate to the specific job for which you’re applying. Many experienced workers make the mistake of talking too much about their experience, especially the irrelevant parts. There’s no need to recap your entire resume. Keep it to five minutes or less and leave some space for the interviewer to ask follow-up questions.
#2: How would you describe yourself?
The employer may be concerned about your fitting in with younger workers, taking direction from a younger supervisor, and coping with a hectic schedule. Research studies by the American Association of Retired Persons (AARP) have found that many employers think older workers lack flexibility and adaptability, are reluctant to accept new technology, and have difficulty learning new skills.
Demonstrate a high energy level throughout the interview. Highlight examples of your willingness to learn and take on new projects, your latest technology skills, and your ability to remain flexible and/or handle stress.
#3: How old are you?
Although this is not an illegal question, it is a stupid question for an interviewer to ask. If you’re 40 or older, you’re protected by the Age Discrimination in Employment Act (ADEA). If the interviewer asks this question and does not hire you, he or she needs to be able to prove that you were passed over because you lacked the qualifications and not because of your age.
This question could also be a way to try to get an applicant to volunteer other personal information, such as family status or the desire to get pregnant, which are illegal questions. If you really want this position and feel that the interviewer has no discriminatory intentions, do not react negatively. Stress your skills and abilities to get the job done.
#4: You seem overqualified; why do you want this job?
This is the question that often cloaks subtle age discrimination. The employer may be questioning your goals or challenging your long-term commitment to the job. Also, a younger hiring manager might be intimidated by your experience or be uncomfortable supervising someone older. This question may also give the interviewer an opening to ask about your salary, which provides the cost excuse they need, or to claim that you’d be “bored in this position.”
Indicate your sincere interest in working for the organization. Emphasize your unique attitudes, abilities, and interests that led you to apply for the job. Express your enthusiasm for the job and for the opportunity to learn. De-emphasize your many years of experience, but do stress the skills that relate to this particular position.
#5: Will you be comfortable working for someone younger?
Some employers may be concerned that midlife and older workers will be reluctant to accept younger people as managers and bosses. Age should not be a determining factor in leadership; both younger and older people are capable of leading and managing.
One response that can be very effective for dispelling this concern is, “I’ve had other managers who were younger than I am, and just like the older ones, some are better than others,” or “I’ve learned something from every manager I’ve had.”
#6: You haven’t worked for a long time; are you sure you can handle this job?
Give a quick all-purpose reason and then focus on what you’ve been doing in your downtime — upgrading skills, learning about new industries, etc.
#7: How is your health?
If you have an obvious physical disability that might affect your ability to do the particular job, you may want to indicate how you manage the disability for top job performance. According to the Americans with Disabilities Act (ADA), this question is illegal during the pre-offer stage. What the employer has a right to know at this point is whether the applicant can perform the essential functions of the job with or without reasonable accommodation. Due to the ADA, most employers are legally bound not to discriminate against persons with disabilities. Those who can be accommodated in the workplace have strong protections against employment discrimination.
Once a company hires you, it may not ask for specific medical information unless it affects your job performance. You need to know the HR policies regarding medical leave and what information needs to be communicated.
#8: We don’t have many employees who are your age; would that bother you?
Although federal law bars employers from considering a candidate’s age in making any employment decision, it’s possible that you’ll be asked age-related questions in an interview, perhaps out of the interviewer’s ignorance or perhaps to test your response.
Explain that you believe your age would be an asset to the organization. Emphasize that you’re still eager to learn and improve, and it doesn’t matter who helps you. The age of the people you work with is irrelevant. Be sure that you know your rights under the ADEA.
#9: What are your salary requirements?
Try to postpone responding to this question until a job offer has been made. If asked, provide a salary range that you’ve found during your job market investigation. You can obtain salary ranges by talking to people who work in the same field, reviewing industry journals and Internet sites, and analyzing comparable jobs. Based on your research, you can provide a salary range in line with the current market.
If you don’t have the range and you’re asked this question, ask the interviewer, “What salary range are you working with?” Chances are 50/50 that the interviewer will tell you. If the interviewer continues to press for an answer, say something like, “Although I’m not sure what this particular job is worth, people who do this sort of job generally make between $___ and $___.”
Be prepared
The issue of age discrimination in the tech industry isn’t new, and it’s certainly not dissipating any time soon. Although various federal agencies urge employers to look beyond age myths, pointing out that “many mid-career workers have a breadth of experience that could benefit many young IT companies,” a lot more can still be done on the regulatory and enforcement end.
In the meantime, older, skilled, experienced workers will continue to struggle to find full-time employment. But by learning to identify potential age bias, and knowing how best to respond to related questions, you can make a strong attempt to get past the age-issue hurdle.
Friday, December 7, 2007
10 common Web design mistakes to watch out for
When you start designing a Web site, your options are wide open. Yet all that potential can lead to problems that may cause your Web site to fall short of your goals. The following list of design mistakes addresses the needs of commercial Web sites, but it can easily be applied to personal and hobby sites and to professional nonprofit sites as well.
#1: Failing to provide information that describes your Web site
Every Web site should be very clear and forthcoming about its purpose. Either include a brief descriptive blurb on the home page of your Web site or provide an About Us (or equivalent) page with a prominent and obvious link from the home page that describes your Web site and its value to the people visiting it.
It’s even important to explain why some people may not find it useful, providing enough information so that they won’t be confused about the Web site’s purpose. It’s better to send away someone uninterested in what you have to offer with a clear idea of why he or she isn’t interested than to trick visitors into wasting time finding this out without your help. After all, a good experience with a Web site that is not useful is more likely to get you customers by word of mouth than a Web site that is obscure and difficult to understand.
#2: Skipping alt and title attributes
Always make use of the alt and title attributes for every XHTML tag on your Web site that supports them. This information is critically important for accessibility when the Web site is visited with browsers that don’t support images, and when a visitor needs more information than the main content provides.
The most common reason for this need is accessibility for the disabled, such as blind visitors who use screen readers to surf the Web. Just make sure you don’t include too much text in the alt or title attribute; the text should be short, clear, and to the point. You don’t want to inundate your visitors with paragraph after paragraph of useless, vague information in numerous pop-up messages. The purpose of alt and title attributes is, in general, to enhance accessibility.
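As a minimal sketch (the file names and wording here are hypothetical), a well-described image and link might look like this in XHTML:
<img src="/images/storefront.jpg"
     alt="Photograph of our downtown storefront"
     title="Our downtown location, open weekdays 9 to 5" />
<a href="/about/" title="Who we are and what this site offers">About Us</a>
A screen reader can speak the alt text in place of the image, and a text-only browser displays it, so the page loses nothing essential when the image can’t be shown.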
#3: Changing URLs for archived pages
All too often, Web sites change the URLs of pages when they become outdated and move off the main page into the archives. This can make it extremely difficult to build up good search engine placement, as links to pages of your Web site become broken. When you first create your site, do so in a manner that allows you to move content into archives without having to change the URL. Popularity on the Web is built on word of mouth, and you won’t get that publicity if your page URLs change every few days.
#4: Not dating your content
In general, you must update content if you want return visitors. People come back only if there’s something new to see. This content needs to be dated, so that your Web site’s visitors know what is new and in what order it appeared. Even in the rare case that Web site content does not change regularly, it will almost certainly change from time to time — if only because a page needs to be edited now and then to reflect new information.
Help your readers determine what information might be out of date by date stamping all the content on your Web site somehow, even if you only add “last modified on” fine print at the bottom of every content page. This not only helps your Web site’s visitors, but it also helps you: The more readers understand that any inconsistencies between what you’ve said and what they read elsewhere is a result of changing information, the more likely they are to grant your words value and come back to read more.
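A minimal sketch of the “fine print” approach, assuming a hand-edited static page (the date and class name are hypothetical):
<div class="page-footer">
  <!-- Update this line whenever the page content changes -->
  <small>Last modified on 7 December 2007</small>
</div>
On a dynamically generated site, the same line can be filled in automatically from the record’s modification timestamp, so no page ships without a date.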
#5: Creating busy, crowded pages
Including too much information in one location can drive visitors away. The common-sense tendency is to be as informative as possible, but you should avoid providing too much of a good thing. When excessive information is provided, readers get tired of reading it after a while and start skimming. When that gets old, they stop reading altogether.
Keep your initial points short and relevant, in bite-size chunks, with links to more in-depth information when necessary. Bulleted lists are an excellent means of breaking up information into sections that are easily digested and will not drive away visitors to your Web site. The same principles apply to lists of links — too many links in one place becomes little more than line noise and static. Keep your lists of links short and well-organized so that readers can find exactly what they need with little effort. Visitors will find more value in your Web site when you help them find what they want and make it as easily digestible as possible.
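In markup terms, that can be as simple as this hypothetical navigation fragment: a short, labeled list instead of a wall of links:
<h3>Getting started</h3>
<ul>
  <li><a href="/install/">Installation guide</a></li>
  <li><a href="/faq/">Frequently asked questions</a></li>
  <li><a href="/support/">Contact support</a></li>
</ul>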
#6: Going overboard with images
With the exception of banners and other necessary branding, decorative images should be used as little as possible. Use images to illustrate content when it is helpful to the reader, and use images when they themselves are the content you want to provide. Do not strew images over the Web site just to pretty it up or you’ll find yourself driving away visitors. Populate your Web site with useful images, not decorative ones, and even those should not be too numerous. Images load slowly, get in the way of the text your readers seek, and are not visible in some browsers or with screen readers. Text, on the other hand, is universal.
#7: Implementing link indirection, interception, or redirection
Never prevent other Web sites from linking directly to your content. Far too many major content providers violate this rule; for example, news Web sites that redirect links to specific articles so that visitors always end up at the home page. This heavy-handed treatment of incoming visitors, forcing them to the home page as if that could force them to be interested in the rest of the site’s content, just drives people away in frustration. When they have difficulty finding an article, your visitors may give up and go elsewhere for information. Perhaps worse, incoming links improve your search engine placement dramatically, and by making incoming links fail to work properly, you discourage others from linking to your site. Never discourage other Web sites from linking to yours.
#8: Making new content difficult to recognize or find
In #4, we mentioned keeping content fresh and dating it accordingly. Here’s another consideration: Any Web site whose content changes regularly should make the changes easily available to visitors. New content today should not end up in the same archive as material from three years ago tomorrow, especially with no way to tell the difference.
New content should stay fresh and new long enough for your readers to get some value from it. This can be aided by categorizing it, if you have a Web site whose content is updated very quickly (like Slashdot). By breaking up new items into categories, you can ensure that readers will still find relatively new material easily within specific areas of interest. Effective search functionality and good Web site organization can also help readers find information they’ve seen before and want to find again. Help them do that as much as possible.
#9: Displaying thumbnails that are too small to be helpful
When providing image galleries with large numbers of images, linking to them from lists of thumbnails is a common tactic. Thumbnail images are intended to give the viewer an idea of what the main image looks like, so it’s important to avoid making them too small.
It’s also important to produce scaled-down and/or cropped versions of your main images, rather than to use XHTML and CSS to resize the images. When images are resized using markup, the larger image size is still being sent to the client system — to the visitor’s browser. When loading a page full of thumbnails that are actually full-size images resized by markup and stylesheets, a browser uses a lot of processor and memory resources. This can lead to browser crashes and other problems or, at the very least, cause extremely slow load times. Slow load times cause Web site visitors to go elsewhere. Browser crashes are even more effective at driving visitors away.
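As an illustrative sketch (file names and dimensions are hypothetical), the difference looks like this:
<!-- Wasteful: the full-size file downloads, then is merely displayed small -->
<img src="/photos/beach-1600x1200.jpg" width="120" height="90" alt="Beach at sunset" />

<!-- Better: a genuinely small thumbnail file links to the full image -->
<a href="/photos/beach-1600x1200.jpg">
  <img src="/photos/thumbs/beach-120x90.jpg" width="120" height="90" alt="Beach at sunset (thumbnail)" />
</a>
The first version forces the browser to fetch and scale the full image for every thumbnail on the page; the second transfers only a few kilobytes per thumbnail and loads the large file only when the visitor asks for it.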
#10: Forgoing Web page titles
Many Web designers don’t set the title of their Web pages. This is obviously a mistake, if only because search engines identify your Web site by page titles in the results they display, and saving a Web page in your browser’s bookmarks uses the page title for the bookmark name by default.
A less obvious mistake is the tendency of Web designers to use the same title for every page of the site. It would be far more advantageous to provide a title for every page that identifies not only the Web site, but the specific page. Of course, the title should still be short and succinct. A Web page title that is too long is almost as bad as no Web page title at all.
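A minimal sketch of per-page titles (the site and page names are made up):
<!-- Home page -->
<title>Acme Widgets - Home</title>

<!-- An interior page: the site name plus the specific page -->
<title>Acme Widgets - Returns and Refunds Policy</title>
Each title is short enough to read in a search result or a bookmark list, yet still distinguishes the page from every other page on the site.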
Achieving success
These considerations for Web design are important, but they’re often overlooked or mishandled. A couple of minor failures can be overcome by successes in other areas, but it never pays to shoot yourself in the foot just because you have another foot to use. Enhance your Web site’s chances of success by keeping these design principles in mind.
#1: Failing to provide information that describes your Web site
Every Web site should be very clear and forthcoming about its purpose. Either include a brief descriptive blurb on the home page of your Web site or provide an About Us (or equivalent) page with a prominent and obvious link from the home page that describes your Web site and its value to the people visiting it.
It’s even important to explain why some people may not find the site useful, providing enough information that they won’t be confused about its purpose. It’s better to send away someone uninterested in what you have to offer with a clear idea of why it isn’t for him or her than to leave visitors to waste time discovering that on their own. After all, a good experience with a Web site that turns out not to be useful is more likely to get you customers by word of mouth than a Web site that is obscure and difficult to understand.
#2: Skipping alt and title attributes
Always make use of the alt and title attributes for every XHTML element on your Web site that supports them. This information is critical for accessibility, both when the Web site is visited with browsers that don’t display images and when a visitor needs more context than the main content alone provides.
The most common reason for this need is accessibility for the disabled, such as blind visitors who use screen readers to surf the Web. Just make sure you don’t include too much text in the alt or title attribute: it should be short, clear, and to the point. You don’t want to inundate your visitors with paragraph after paragraph of vague information in numerous pop-up messages. The purpose of the alt and title attributes is, above all, to enhance accessibility.
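As a rough sketch (the file name and wording here are invented for illustration), descriptive alt and title attributes on an image might look like this:
<img src="charts/q1-sales.png"
     alt="Bar chart of first-quarter sales by region"
     title="First-quarter sales, by region" />
A screen reader speaks the alt text in place of the image, while most graphical browsers show the title text as a small pop-up, so each should make sense on its own.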
#3: Changing URLs for archived pages
All too often, Web sites change the URLs of pages when they become outdated and move off the main page into archives. This makes it extremely difficult to build up good search engine placement, because links to pages of your Web site break. When you first create your site, structure it so that you can move content into archives without changing its URL. Popularity on the Web is built on word of mouth, and you won’t get any of that publicity if your page URLs change every few days.
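One way to build this in from the start, sketched here with invented URLs, is to publish each piece of content at a permanent, date-based address rather than at a “current” location that changes when the piece is archived:
<!-- A permanent URL that never has to change when the article ages -->
<a href="/articles/2008/05/small-thumbnails.html">Read the article</a>
<!-- A "latest" URL that breaks every incoming link once the article moves -->
<a href="/latest.html">Read the article</a>
Links that others create to the first address keep working indefinitely; links to the second die as soon as new content replaces the old.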
#4: Not dating your content
In general, you must update content if you want return visitors. People come back only if there’s something new to see. This content needs to be dated, so that your Web site’s visitors know what is new and in what order it appeared. Even in the rare case that Web site content does not change regularly, it will almost certainly change from time to time — if only because a page needs to be edited now and then to reflect new information.
Help your readers determine what information might be out of date by date stamping all the content on your Web site somehow, even if you only add “last modified on” fine print at the bottom of every content page. This not only helps your Web site’s visitors, but it also helps you: The more readers understand that any inconsistencies between what you’ve said and what they read elsewhere are the result of changing information, the more likely they are to grant your words value and come back to read more.
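A minimal sketch of that fine print, with an invented class name and date:
<p class="last-modified"><small>Last modified on 14 May 2008</small></p>
Even something this small lets a reader judge at a glance whether a page might be stale.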
#5: Creating busy, crowded pages
Including too much information in one location can drive visitors away. The common-sense tendency is to be as informative as possible, but you should avoid providing too much of a good thing. When excessive information is provided, readers get tired of reading it after a while and start skimming. When that gets old, they stop reading altogether.
Keep your initial points short and relevant, in bite-size chunks, with links to more in-depth information when necessary. Bulleted lists are an excellent means of breaking up information into sections that are easily digested and will not drive away visitors. The same principle applies to lists of links: too many links in one place become little more than line noise and static. Keep your lists of links short and well organized so that readers can find exactly what they need with little effort. Visitors will find more value in your Web site when you help them find what they want and make it as easy to digest as possible.
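As a quick sketch (the link targets are invented), bite-size points with links to more in-depth pages might look like this:
<ul>
  <li>Keep summaries short (<a href="/tips/summaries.html">details</a>)</li>
  <li>Group related links together (<a href="/tips/link-groups.html">details</a>)</li>
  <li>Cut anything readers can live without (<a href="/tips/cutting.html">details</a>)</li>
</ul>
Each item can be scanned in a second, and readers who want more can follow the link instead of wading through it inline.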
#6: Going overboard with images
With the exception of banners and other necessary branding, decorative images should be used as little as possible. Use images to illustrate content when doing so helps the reader, and use images when they themselves are the content you want to provide. Do not strew images across the Web site just to pretty it up, or you’ll find yourself driving away visitors. Populate your Web site with useful images, not decorative ones, and even those should not be too numerous. Images load slowly, get in the way of the text your readers seek, and are invisible in some browsers and to screen readers. Text, on the other hand, is universal.
#7: Implementing link indirection, interception, or redirection
Never prevent other Web sites from linking directly to your content. Far too many major content providers violate this rule; some news Web sites, for example, redirect links to specific articles so that visitors always end up at the home page. This heavy-handed treatment of incoming visitors, forcing them to the home page as if that could make them interested in the rest of the site’s content, just drives people away in frustration. When they have difficulty finding an article, your visitors may give up and go elsewhere for information. Perhaps worse, incoming links improve your search engine placement dramatically, and by making those links fail to work properly, you discourage others from creating them. Never discourage other Web sites from linking to yours.
#8: Making new content difficult to recognize or find
In #4, we mentioned keeping content fresh and dating it accordingly. Here’s another consideration: Any Web site whose content changes regularly should make the changes easy for visitors to find. Content posted today should not, by tomorrow, be buried in the same archive as material from three years ago, especially with no way to tell the difference.
New content should stay visibly fresh and new long enough for your readers to get some value from it. Categorization helps here, particularly if your Web site’s content is updated very quickly (as Slashdot’s is). By breaking new items into categories, you ensure that readers can still find relatively new material easily within their specific areas of interest. Effective search functionality and good Web site organization also help readers find information they’ve seen before and want to find again. Help them do that as much as possible.
#9: Displaying thumbnails that are too small to be helpful
When providing image galleries with large numbers of images, linking to the full-size images from pages of thumbnails is a common tactic. Thumbnail images are meant to give the viewer an idea of what the main image looks like, so it’s important not to make them too small.
It’s also important to produce genuinely scaled-down and/or cropped versions of your main images, rather than using XHTML and CSS to resize them. When images are resized only by markup, the full-size image file is still sent to the visitor’s browser. Loading a page full of “thumbnails” that are actually full-size images shrunk by markup and stylesheets costs the browser a lot of processor and memory resources, which can lead to browser crashes and other problems or, at the very least, extremely slow load times. Slow load times send Web site visitors elsewhere. Browser crashes are even more effective at driving them away.
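To make the difference concrete, here is a sketch with invented file names. The first example serves a genuinely scaled-down thumbnail file and links it to the full image; the second forces the browser to download the full-size file and merely display it small:
<!-- Good: a real 120x90 thumbnail file, linked to the full-size image -->
<a href="photos/sunset-large.jpg">
  <img src="photos/sunset-thumb.jpg" width="120" height="90"
       alt="Sunset over the harbor (thumbnail)" />
</a>
<!-- Bad: the full-size file is downloaded, then squeezed down by markup -->
<img src="photos/sunset-large.jpg" width="120" height="90"
     alt="Sunset over the harbor" />
Multiply the wasted download by a few dozen gallery images and the slow load times described above are almost guaranteed.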
#10: Forgoing Web page titles
Many Web designers don’t set the title of their Web pages. This is obviously a mistake, if only because search engines identify your Web site by page titles in the results they display, and saving a Web page in your browser’s bookmarks uses the page title for the bookmark name by default.
A less obvious mistake is using the same title for every page of the site. It is far more advantageous to give every page a title that identifies not only the Web site but the specific page. Of course, each title should still be succinct; a Web page title that is too long is almost as bad as no title at all.
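For example (the site and page names are invented), every page can pair the site name with a short page-specific description:
<!-- Home page -->
<title>Example Hardware Review</title>
<!-- An article page: identifies both the site and the specific page -->
<title>Example Hardware Review: Choosing a Quiet CPU Cooler</title>
Search results and bookmarks then distinguish every page of the site at a glance.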
Achieving success
These considerations for Web design are important, but they’re often overlooked or mishandled. A couple of minor failures can be overcome by successes in other areas, but it never pays to shoot yourself in the foot just because you have another foot to use. Enhance your Web site’s chances of success by keeping these design principles in mind.