Tuesday, October 28, 2008

Windows Home Server - Real-life scenario

I’ve been running Windows Home Server for just under a year now and thought I’d take a little time to explain my setup in detail and explain why I use this product when I could also simply build a Linux server to do many of the things handled by WHS.

My setup
Late last year, I bought an HP MediaSmart EX470 Windows Home Server for a project I was working on. Prior to buying the MediaSmart system, I had built a custom system with an evaluation copy of Windows Home Server provided by Microsoft, and gave it up in favor of the HP server. The HP MediaSmart systems ship with a paltry 512MB of RAM, but, with a little know-how, it’s not all that hard to upgrade to 2GB of RAM, which is almost a must. Frankly, HP will probably have to address the RAM issue at some point and give customers the option of easily expanding the RAM without voiding the warranty. The EX470 ships with a single 500GB hard drive. In order to enjoy the full benefit of Windows Home Server, you really need multiple hard drives. Since installing my server, I’ve added three more 500GB drives for a total of 2TB of raw capacity. While that sounds like a ton of space, the way WHS uses disk space (folder duplication, for example, keeps two copies of protected shares on separate drives) means the usable capacity is considerably less than the raw total. This is not meant to be a negative point… just fact.

The MediaSmart server includes a gigabit Ethernet port and I’ve connected it, as well as my two primary workstations, to a gigabit Ethernet switch. I also use a wireless-N network at home to connect my wife’s Windows desktop computer and my MacBook to the network. I run VMware Fusion on my MacBook so I can run Windows programs.

How I use WHS
I save almost everything to my Windows Home Server. I write a lot, so all of my work is stored there, as is my iTunes library, backups of my DVDs and a lot more. All of the computers in my house are automatically backed up to my server, too. I have personally used WHS’ client restoration capability to restore a client computer and it’s an absolutely fantastic and surprisingly easy to use procedure.

Although WHS Power Pack 1 now includes the ability to back up the Windows Home Server itself to an external hard drive, a feature that was missing from the OEM release, I’ve opted to use the KeepVault Gold Plan ($199/year, but currently a $99/year special) to automatically back up my Windows Home Server to KeepVault’s servers. I’ve been using KeepVault for almost a year now and am very pleased. The only disadvantage to this method is that KeepVault won’t back up files that are larger than 5GB in size, but it provides unlimited storage space. The only files I have that are larger than 5GB are generally ISO files and virtual machine images and, if I so desired, I could take steps to protect even these files. However, for performance reasons, I don’t run my virtual machines from my server anyway, although I would give it a shot if WHS included a good way to handle iSCSI.

With the Power Pack 1 release, WHS is finally ready for prime time. Prior to this release, WHS suffered from a serious data corruption bug which, unfortunately, I fell victim to. The resulting damage was more of an annoyance than a disaster, as I had to work around it, but as I said, PP1 fixes this issue and adds some additional capability.

Windows Home Server includes very good remote access capability, too. When I’m on the road for business, I don’t have to try to remember exactly which files I need to take with me. If I forget something, I can just browse to my server and get the file. Configuring this capability is a breeze, too, as long as you have a router that supports UPnP, which I do. Otherwise, it would take manual router configuration, making WHS less than desirable for the average home user.

Could I have replicated this functionality with Linux, other open source products, and some scripts? Sure. Would it have worked? Well, probably not as seamlessly. Even something like WHS is a tool for me, and I’ve gotten to a point where I just need stuff to work so that I can focus on getting a job done. My WHS system protects my files at two levels (locally in the event of a client failure, and remotely in the event of a server failure) and gives me an easy way to get to my information if necessary.

Although the market need is still somewhat questionable, WHS is aimed at users who lack the technical expertise to build computers from scratch or who want to focus on the end result of the product: a working, stable server. For those who enjoy the thrill of building something from scratch, WHS is probably not for you. For me, however, it’s a perfect complement to my clients and perfectly fits my work style.

Help! My SQL Server Log File is too big!!!

Over the years, I have assisted so many different clients whose transaction log file has become “too large” that I thought it would be helpful to write about it. The issue can be a system-crippling problem, but it can be easily avoided. Today I’ll look at what causes your transaction logs to grow too large, and what you can do to curb the problem.

Note: For the purposes of today’s article, I will assume that you’re using SQL Server 2005 or later.

Every SQL Server database has at least two files: a data file and a transaction log file. The data file stores user and system data, while the transaction log file stores all transactions and the database modifications made by those transactions. As time passes, more and more database transactions occur and the transaction log needs to be maintained. If your database is in the Simple recovery mode, then the transaction log is truncated of inactive transactions after the Checkpoint process occurs. The Checkpoint process writes all modified data pages from memory to disk. When the Checkpoint is performed, the inactive portion of the transaction log is marked as reusable.

Transaction Log Backups
If your database recovery model is set to Full or Bulk-Logged, then it is absolutely VITAL that you make transaction log backups to go along with your full backups. SQL Server 2005 databases are set to the Full recovery model by default, so you may need to start creating log backups even if you haven’t run into problems yet. The following query can be used to determine the recovery model of the databases on your SQL Server instance.

SELECT name, recovery_model_desc
FROM sys.databases

Before going into the importance of transaction log backups, I must stress the importance of creating Full database backups. If you are not currently creating Full database backups and your database contains data that you cannot afford to lose, you absolutely need to start. Full backups are the starting point for any type of recovery process, and are critical to have in case you run into trouble. In fact, you cannot create transaction log backups without first having created a full backup at some point.

The Full or Bulk-logged Recovery Mode
With the Full or Bulk-Logged recovery mode, inactive transactions remain in the transaction log file until after a Checkpoint is processed and a transaction log backup is made. Note that a full backup does not remove inactive transactions from the transaction log. The transaction log backup performs a truncation of the inactive portion of the transaction log, allowing it to be reused for future transactions. This truncation does not shrink the file, it only allows the space in the file to be reused (more on file shrinking a bit later). It is these transaction log backups that keep your transaction log file from growing too large. An easy way to make consistent transaction log backups is to include them as part of your database maintenance plan.

If your database recovery model is set to Full, and you’re not creating transaction log backups and never have, you may want to consider switching your recovery model to Simple. The Simple recovery model should take care of most of your transaction log growth problems because log truncation occurs after the Checkpoint process. You won’t be able to recover your database to a point in time using Simple, but if you weren’t creating transaction log backups to begin with, restoring to a point in time wouldn’t have been possible anyway. To switch your recovery model to Simple, issue the following statement in your database.

ALTER DATABASE YourDatabaseName
SET RECOVERY SIMPLE

Not performing transaction log backups is probably the main cause for your transaction log growing too large. However, there are other situations that prevent inactive transactions from being removed even if you’re creating regular log backups. The following query can be used to get an idea of what might be preventing your transaction log from being truncated.

SELECT name, log_reuse_wait_desc
FROM sys.databases

Long-Running Active Transactions
A long-running transaction can prevent transaction log truncation. These types of transactions can range from transactions being blocked from completing to open transactions waiting for user input. In any case, the transaction keeps the log active from the start of the transaction. The longer the transaction remains open, the larger the transaction log can grow. To see the longest-running transaction on your SQL Server instance, run the following statement.

DBCC OPENTRAN

If there are open transactions, DBCC OPENTRAN will provide a session_id (SPID) of the connection that has the transaction open. You can pass this session_id to sp_who2 to determine which user has the connection open.

EXECUTE sp_who2 spid

Alternatively, you can run the following query to determine the user.

SELECT *
FROM sys.dm_exec_sessions
WHERE session_id = spid --from DBCC OPENTRAN

You can determine the SQL statement being executed inside the transaction in a couple of different ways. First, you can use the DBCC INPUTBUFFER() statement to return the first part of the SQL statement.

DBCC INPUTBUFFER(spid) --from DBCC OPENTRAN

Alternatively, you can use a dynamic management view included in SQL Server 2005 to return the SQL statement:

SELECT
    r.session_id,
    r.blocking_session_id,
    s.program_name,
    s.host_name,
    t.text
FROM sys.dm_exec_requests r
INNER JOIN sys.dm_exec_sessions s ON r.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE s.is_user_process = 1
AND r.session_id = SPID --FROM DBCC OPENTRAN

Backups
Log truncation cannot occur during a backup or restore operation. In SQL Server 2005 and later, you can create a transaction log backup while a full or differential backup is occurring, but the log backup will not truncate the log because the entire transaction log needs to remain available to the backup operation. If a database backup is keeping your log from being truncated, you might consider cancelling the backup to relieve the immediate problem.

Transactional Replication
With transactional replication, the inactive portion of the transaction log is not truncated until transactions have been replicated to the distributor. This may be because the distributor is overloaded and having problems accepting these transactions, or because the Log Reader agent should be run more often. If DBCC OPENTRAN indicates that your oldest active transaction is a replicated one and it has been open for a significant amount of time, this may be your problem.

Database Mirroring
Database mirroring is somewhat similar to transactional replication in that it requires that the transactions remain in the log until the record has been written to disk on the mirror server. If the mirror server instance falls behind the principal server instance, the amount of active log space will grow. In this case, you may need to stop database mirroring, take a log backup that truncates the log, apply that log backup to the mirror database, and restart mirroring.

Disk Space
It is possible that you’re simply running out of disk space, causing transaction log errors. You might be able to free disk space on the disk drive that contains the transaction log file for the database by deleting or moving other files. The freed disk space will allow the log file to grow. If you cannot free enough disk space on the drive that currently contains the log file, then you may need to move the file to a drive with enough space to handle it. If your log file is not set to grow automatically, you’ll want to consider changing that or adding additional space to the file. Another option is to create a new log file for the database on a different disk that has enough space, using the ALTER DATABASE YourDatabaseName ADD LOG FILE syntax.
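
As a minimal sketch of that last option, the statement looks something like the following; the logical name, path, and sizes here are placeholders you would adjust for your environment.

ALTER DATABASE YourDatabaseName
ADD LOG FILE
(
    NAME = 'YourDatabaseName_log2', -- hypothetical logical file name
    FILENAME = 'E:\Logs\YourDatabaseName_log2.ldf', -- a drive with free space
    SIZE = 1GB,
    FILEGROWTH = 512MB
)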

Shrinking the File
Once you have identified your problem and have been able to truncate your log file, you may need to shrink the file back to a manageable size. You should avoid shrinking your files on a consistent basis, as it can lead to fragmentation issues. However, if you’ve performed a log truncation and need your log file to be smaller, this is the time to shrink it. You can do it through Management Studio by right-clicking the database and selecting All Tasks, Shrink, then choosing Database or Files. If I am using the Management Studio interface, I generally select Files and shrink only the log file.

This can also be done using T-SQL. The following query will find the name of my log file, which I’ll need to pass to the DBCC SHRINKFILE command.

SELECT name
FROM sys.database_files
WHERE type_desc = 'LOG'

Once I have my log file name, I can use the DBCC command to shrink the file. In the following command, I try to shrink my log file down to 1GB.

DBCC SHRINKFILE ('SalesHistory_Log', 1000)

Also, make sure that your databases are NOT set to auto-shrink. Databases that are shrunk at continuous intervals can encounter real performance problems.

TRUNCATE_ONLY and NOLOG
If you’re a DBA and have run into one of the problems listed in this article before, you might be asking yourself why I haven’t mentioned just using TRUNCATE_ONLY to truncate the log directly without creating the log backup. The reason is that in almost all circumstances you should avoid doing it. Doing so breaks the transaction log chain, which makes recovering to a point in time impossible: you lose not only the transactions that have occurred since the last transaction log backup, but also the ability to recover any future transactions until a differential or full database backup has been created. This method is so discouraged that Microsoft is not including it in SQL Server 2008 and future versions of the product. I’ll include the syntax here to be thorough, but you should avoid using it at all costs.

BACKUP LOG SalesHistory
WITH TRUNCATE_ONLY

It is just as easy to perform the following BACKUP LOG statement to actually create the log backup to disk.

BACKUP LOG SalesHistory
TO DISK = 'C:/SalesHistoryLog.bak'

Moving forward
Today I took a look at several different things that can cause your transaction log file to become too large, and some ideas as to how to overcome these problems. The solutions range from correcting your code so that transactions do not remain open so long, to creating more frequent log backups. In addition to these solutions, you should also consider adding notifications to your system to let you know when your database files are reaching a certain threshold. The more proactive you are in terms of alerts for these types of events, the better chance you’ll have to correct the issue before it turns into a real problem.
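
As a rough sketch of such a check (the 10GB threshold below is purely an example you would tune), the following query flags any log file in the current database that has grown past the threshold; scheduled in a SQL Agent job, something like it could drive an e-mail alert.

SELECT name, size/128 AS size_in_mb -- size is reported in 8KB pages
FROM sys.database_files
WHERE type_desc = 'LOG'
AND size/128 > 10240 -- hypothetical 10GB threshold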

The top four mistakes organizations make when building datacenters

I had the opportunity to speak with Etienne Guerou, a Vice President at a company that is a world leader in power solutions. Over a cup of coffee, Mr. Guerou, who has 20 years of experience in designing and building datacenters, briefed me on some of the top mistakes that IT professionals and decision-makers make when building their own datacenters.

Here are the main mistakes he outlined:

1. Harboring the wrong appreciation of a datacenter

One typical mistake is that IT professionals and decision-makers don’t differentiate between datacenters. Instead, they treat a datacenter as an all-inclusive black box where “many” servers are to be housed. That mindset is typically exposed when they’re confronted with the simple question: “What do you intend to use your datacenter for?”

Ask yourself about the scale and anticipated usage of the datacenter, expansion plans of at least two to three years down the road, whether blade servers or standard rack mount servers will be utilized, etc. When you answer these questions, you can then extrapolate power consumption, as well as current and future capacities in terms of cabling, cooling and power.

2. Attempting to run a datacenter from improper facilities

It would be a mistake to simply acquire an ad hoc facility and have it rebadged as a datacenter without a proper appraisal of its suitability, cautions Guerou. He cites an example in which a client, after having signed the lease for a fairly large space, sought out Mr. Guerou’s advice on how to proceed. To the client’s horror, the answer was that the venue was simply not suitable for a datacenter due to granite floors and thick beams across the ceiling, resulting in an effective height that was simply inadequate for cabling and cooling purposes.

While it might not be possible for most organizations to put up custom-built datacenters on a whim, what this client should have done was get an experienced consultant in and involved right from the get-go.

Ideally, in Mr. Guerou’s own words: “A datacenter should be a technical building dedicated to a very particular business of processing data.”

3. Buying by brands

Another common mistake is that many IT professionals attempt to buy into selected brands. While this strategy might work well when it comes to standardizing on servers or networking gear, an efficient and well-run datacenter has nothing to do with specific hardware brands or models. Rather, you should approach the datacenter from the perspective of a complete solution, where the entire design has to be considered as an integrated whole.

As an unfortunate side-effect of strong marketing by enterprise vendors, many users have, consciously or subconsciously, bought into the idea of designing a datacenter by snapping together disparate pieces of hardware. While not wrong, it’s imperative that the end-result be evaluated as a whole - and not in a piecemeal fashion.

Various hardware, such as types of servers, positioning of racks, networking equipment and redundant power supplies should dovetail properly with infrastructure such as cooling, ventilation, wiring, fire-suppression systems, and security measures.

4. Rushing onto the “Green IT” bandwagon

The increasing popularity of “Green IT” has vendors unveiling new servers and equipment touted for their superior power efficiency. While the idea is definitely laudable, you should separate the marketing hype from actual operational consumption.

For example, two UPS units from “vendor X,” while individually more power efficient at 90% loading, might actually offer a much poorer showing when deployed in a redundant configuration, where each will end up running at only 45% loading. In the absence of proper scrutiny, green IT initiatives can degenerate into a numbers game.

At the end of the day, you should take the overall power efficiency (or power factor) of the entire datacenter as the benchmark, rather than weighing individual vendor claims. After all, a yardstick of a well-run datacenter has always been power efficiency.

In parting, Mr. Guerou has the following advice for organizations thinking of building their own datacenter. “Hire an experienced consultant.”

iSCSI is the future of storage

This week, HP announced their $360 million acquisition of LeftHand networks. Last year, Dell surprised the tech industry with a $1.4 billion purchase of the formerly independent EqualLogic. With these iSCSI snap-ups by true tech titans, iSCSI has officially arrived, is here to stay, and, I believe, will become the technology of choice for most organizations in the future.

This is not to say that iSCSI has been sitting in the background up to this point. On the contrary, the technology has taken the industry by storm. Both of these companies staked their entire business on the bet that organizations would see the intrinsic value in iSCSI’s simple installation and management. To say that both companies have been successful would be an understatement.

I’m a big fan of both EqualLogic’s and LeftHand Networks’ offerings, having purchased an EqualLogic unit in a former life. At that time, I narrowed my selection down to two options, LeftHand and EqualLogic. Both solutions had their pros and cons, but both were more than viable.

It’s not all about EqualLogic and LeftHand, though. The big guns in storage have finally jumped feet first into the iSCSI fray with extremely compelling products of their own. Previously, these players, including EMC and NetApp, simply bolted iSCSI onto existing products. Lately, even the biggest Fibre Channel vendors are releasing native iSCSI arrays aimed at the mid-tier of the market. EMC’s AX4, for example, is available in both native iSCSI and native Fibre Channel versions and is priced in such a way that any organization considering EqualLogic or LeftHand should make sure to give the EMC AX4 a look. To be fair, the iSCSI-only AX4:

Does not support SAN Copy for SAN-to-SAN replication
Is not as easy to install or manage as the aforementioned devices, but isn’t bad either
Does not gain additional bandwidth to the array as space is added
Does not include thin provisioning, although this was rumored to be rectified in a future software release
Supports up to 64 attached hosts
But the price per TB is simply incredible, and a comparable solution from a different vendor would not have been attainable. This year, I purchased just shy of 14 TB of raw space on a pair of AX4 arrays (4.8 TB SAS and 9 TB SATA) for under $40K. For the foreseeable future, I don’t need SAN Copy, and space can be managed in ways other than through thin provisioning. Over time, we’ll run about two dozen virtual machines on the AX4 along with our administrative databases and Exchange 2007 databases. By the time I need additional features, the AX4 will be due for replacement anyway.

iSCSI started out at the low end of the market, helping smaller organizations begin to move toward shared storage and away from direct-attached solutions. As time goes on, iSCSI is moving up the food chain and, in many cases, is supplanting small and mid-sized Fibre Channel arrays, particularly in organizations that have never had a SAN before. As iSCSI continues to take advantage of high-speed SAS disks and begins to use 10Gb Ethernet as a transport mechanism, I see it continuing to move higher into the market. Of course, faster, more reliable disks and faster networking capabilities will begin to close the savings gap between iSCSI and Fibre Channel. But iSCSI’s reliance on Ethernet as an underlying transport brings major simplicity to the storage equation, and I doubt that iSCSI’s costs will ever surpass Fibre Channel’s, mainly due to the expensive networking hardware needed for significant Fibre Channel implementations.

Even though iSCSI will continue to make inroads further into many organizations, I don’t think that iSCSI will ever completely push Fibre Channel out of the way. Many organizations rely on the raw performance afforded by Fibre Channel and the folks behind Fibre Channel’s specifications aren’t sitting still. Every year brings advances to Fibre Channel, including faster disks and improved connection speeds.

In short, I see the iSCSI market continuing to grow very rapidly and, over time, supplanting what would have been Fibre Channel installations. Further, as organizations continue to expand their storage infrastructures, iSCSI will be a very strong contender, particularly as the technology is updated to take advantage of improvements in networking speed and disk performance.

Introduction to Policy-Based Management in SQL Server 2008

Policy-Based Management in SQL Server 2008 allows the database administrator to define policies that tie to database instances and objects. These policies allow the Database Administrator (DBA) to specify rules for how objects and their properties can be created or modified. One example would be a database-level policy that disallows the AutoShrink property from being enabled for a database. Another example would be a policy that ensures the name of every trigger created on a database table begins with tr_.



As with any new SQL Server technology (or Microsoft technology in general), there is a new object naming nomenclature associated with Policy-Based Management. Below is a listing of some of the new base objects.

Policy
A Policy is a set of conditions specified on the facets of a target. In other words, a Policy is basically a set of rules specified for properties of database or server objects.

Target
A Target is an object that is managed by Policy-Based Management. Targets include objects such as the database instance, a database, a table, a stored procedure, a trigger, or an index.

Facet
A Facet is a property of an object (target) that can be involved in Policy-Based Management. An example of a Facet is the name of a Trigger or the AutoShrink property of a database.

Condition
A Condition is the criteria that can be specified for a Target’s Facets. For example, you can set a condition for a Facet that specifies that all stored procedure names in the Schema ‘Banking’ begin with ‘bnk_’.



You can also assign a policy to a category. This allows you to manage a set of policies assigned to the same category. A policy belongs to only one category.

Policy Evaluation Modes
A Policy can be evaluated in a number of different ways:



On demand - The policy is evaluated only when directly run by the administrator.
On change: prevent - DDL triggers are used to prevent policy violations.
On change: log only - Event notifications are used to check a policy when a change is made.
On schedule - A SQL Agent job is used to periodically check policies for violations.
Advantages of Policy-Based Management
Policy-Based Management gives you, as a DBA, much more control over your database procedures. You have the ability to enforce your paper policies at the database level. Paper policies are great for defining database standards and guidelines. However, it takes time and effort to enforce these; to strictly enforce them, you have to go over your database with a fine-toothed comb. With Policy-Based Management, you can define your policies and rest assured that they will be enforced.

Next Time
Today I took a look at the basic ideas behind Policy-Based Management in SQL Server 2008. In my next article, I’ll take a look at how you can make these ideas a reality by showing you how to create your own policies to administer your SQL Server.

Defining SQL Server 2008 Policies

Policy-Based Management is a new SQL Server 2008 feature that gives the Database Administrator the ability to define and enforce policies through the database engine. In today’s article, I’ll look at how you can use SQL Server Management Studio to define your own policies.

Define your Policies
The most challenging part of creating an effective database policy system is deciding exactly what it is you want to create policies for. SQL Server 2008 provides a large range of Facets (objects) for which conditions and policies can be defined, so it will absolutely be worth the effort to take some time to map out which Policies you want to enforce.

To define a new Policy, open SQL Server Management Studio and navigate to the Management node in Object Explorer. Before I can define a Policy, I’ll first need to define a new Condition and can easily do so by right-clicking on the Conditions folder under the Policy Management folder.



A Condition is a set of criteria defined on a Facet. A Facet is really nothing more than a SQL Server object that you can involve in a Policy. In the Create New Condition screen, I define a new Condition named NewStoredProcedureNames. I can define the criteria for my new Condition in the Expressions section. Each Facet (Stored Procedure in this case) has a set of Fields for which condition expressions can be defined. For this particular Condition, I want to set criteria so that any new Stored Procedure name begins with usp_, and this is fairly straightforward to do through the editor.



Now that I have my Condition defined, I can create a new Policy.

Right-click the Policy folder and select New Policy. In the Create New Policy window, choose the NewStoredProcedureNames condition we just created as the check condition. Choose the On change: prevent Evaluation Mode. This mode will evaluate the Policy when a new stored procedure is created, and if the procedure name does not start with usp_, an error will be thrown and the new procedure will be disallowed. Be sure to click the Enabled box to enable the Policy.



To test my new Policy, I write a script to create a new stored procedure named GetCurrentDate that returns the current date. When I attempt to execute the script, I receive an error message letting me know that I have violated a Policy. For a friendlier message, you can define informative descriptions with your Policies so that the user is given more instruction as to what condition was violated.




Here is the text of the procedure I attempted to create above.

CREATE PROCEDURE GetCurrentDate
AS
SELECT CAST(GETDATE() AS DATE)

Conclusion
Today I defined a simple Policy to prevent the creation of any new stored procedure whose name does not begin with usp_. The great thing about Policy-Based Management is how elaborate you can make your Policies in order to adhere to your defined database standards. The more you play around with defining policies, the more creative and effective you’ll become at it, so take advantage as soon as you can!
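
As a quick footnote to the example above, simply renaming the procedure so that it carries the required prefix should satisfy the Policy and allow it to be created without error:

CREATE PROCEDURE usp_GetCurrentDate
AS
SELECT CAST(GETDATE() AS DATE)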

See what process is using a TCP port in Windows Server 2008

You may find yourself frequently turning to network tools to determine traffic patterns from one server to another; Windows Server 2008 (and earlier versions of Windows Server) can give you that information locally for its own connections. You can combine the netstat and tasklist commands to determine what process is using a port on the Windows Server.

The following command will show what network traffic is in use at the port level:

Netstat -a -n -o

The -o parameter will display the associated process identifier (PID) using the port. This command will produce an output similar to what is in Figure A.

Figure A



With the PIDs listed in the netstat output, you can follow up with Windows Task Manager (taskmgr.exe) or use the tasklist command with the specific PID that corresponds to the port in question. From the previous example, ports 5800 and 5900 are used by PID 1812, so passing that PID to tasklist will show you the process using the ports. Figure B shows this query.

Figure B
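
For reference, the tasklist filter used for that query looks like this, where 1812 is the PID taken from the netstat output:

tasklist /FI "PID eq 1812"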



This identifies VNC as the process using the ports. While a quick Google search on the ports could possibly obtain the same result, this procedure can be extremely helpful when you’re trying to identify a rogue process that may be running on the Windows Server.

Consider running the browser service on Windows Server 2008 DCs

Many Windows administrators, myself included, are trying to stop using NetBIOS and switch to DNS exclusively for name resolution. But under certain situations, a Windows Server 2008 domain controller may not display networks correctly when browsing the network.

For Windows Server 2008 installations, the Computer Browser service is disabled by default, and dcpromo does not change the configuration of the service when Active Directory is installed. Network browsing is convenient for drive mappings and quick access to systems, and this browsing depends on the short-name features of NetBIOS.

One way to correct these computer display issues is to configure the computer browser service to be an automatic starting service. There are a number of ways to do this, including the sc command. Figure A shows the sc command used to configure the service to be automatic and then immediately start the computer browser service.

Figure A
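
The commands in the figure are along these lines (note that sc syntax requires the space after start=):

sc config browser start= auto
sc start browser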



Having this configuration on the domain controllers that hold flexible single master operation (FSMO) roles can prevent browse-ready computers from being dropped from the display. However, this service has a default state of Disabled and should only be changed if your browse-ready list of computers is shrinking or is limited to the local subnet.

NetBIOS resolution is handy, except on very large Active Directory networks. Larger networks are better served by the Windows Server 2008 GlobalNames zone.
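
If you go the GlobalNames route instead, the zone has to be created and support for it enabled on the DNS server. A sketch of the setup with dnscmd, assuming a DNS server named DC1 and an AD-integrated zone:

dnscmd DC1 /config /enableglobalnamessupport 1
dnscmd DC1 /zoneadd GlobalNames /dsprimary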

Tuesday, October 7, 2008

10 things you should know about launching an IT consultancy

Oh yeah. You’re going to work for yourself, be your own boss. Come and go when you want. No more kowtowing to The Man, right?

Running your own computer consulting business is rewarding, but it’s also full of numerous and competing challenges. Before you make the jump into entrepreneurship, take a moment to benefit from a few hundred hours of research I’ve invested and the real-world lessons I’ve learned in launching my own computer consulting franchise.

There are plenty of launch-your-own-business books out there. I know. I read several of them. Most are great resources. Many provide critical lessons in best managing liquid assets, understanding opportunity costs, and leveraging existing business relationships. But when it comes down to the dirty details, here are 10 things you really, really need to know (in street language) before quitting your day job.

#1: You need to incorporate
You don’t want to lose your house if a client’s data is lost. If you try hanging out a shingle as an independent lone ranger, your personal assets could be at risk. (Note that I’m not dispensing legal or accounting advice. Consult your attorney for legal matters and a qualified accountant regarding tax issues.)

Ultimately, life is easier when your business operates as a business and not as a side project you maintain when you feel like it. Clients appreciate the assurance of working with a dedicated business. I can’t tell you how many clients I’ve obtained whose last IT guy “did it on the side” and has now taken a corporate job and doesn’t have time to help the client whose business has come to a standstill because of computer problems. Clients want to know you’re serious about providing service and that they’re not entering a new relationship in which they’re just going to get burned again in a few months’ time.

#2: You need to register for a federal tax ID number
Next, you need to register for a federal tax ID number. Hardly anyone (vendors, banks, and even some clients) will talk to you if you don’t.

Wait a second. Didn’t you just complete a mountain of paperwork to form your business (either as a corporation or LLC)? Yes, you did. But attorneys and online services charge incredible rates to obtain a federal tax ID for you.

Here’s a secret: It’s easy. Just go to the IRS Web site, complete and submit form SS-4 online, and voila. You’ll be the proud new owner of a federal tax ID.

#3: You need to register for a state sales tax exemption
You need a state sales tax exemption, too (most likely). If you’re in a state that collects sales tax, you’re responsible for ensuring sales tax gets paid on any item you sell a client. In such states, whether you buy a PC for a customer or purchase antivirus licenses, taxes need to be paid.

Check your state’s Web site. Look for information on the state’s department of revenue. You’ll probably have to complete a form, possibly even have it notarized, and return it to the state’s revenue cabinet. Within a few weeks, you’ll receive an account number. You’ll use that account number when you purchase products from vendors. You can opt NOT to pay sales tax when you purchase the item, instead choosing to pay the sales tax when you sell the item to the client.

Why do it this way? Because many (most) consultants charge clients far more for a purchase than the consultant paid. Some call it markup; accountants prefer to view it as profit. But you certainly don’t want to have to try to determine what taxes still need to be paid if some tax was paid earlier. Thus, charge tax at the point of sale to the customer, not when you purchase the item.

#4: You need to register with local authorities
Local government wants its money, too. Depending on where your business is located and where it services customers, you’ll likely need to register for a business license. As with the state sales tax exemption, contact your local government’s revenue cabinet or revenue commission for more information on registering your business. Expect to pay a fee for the privilege.

#5: QuickBooks is your friend
Once your paperwork’s complete, it’s time for more paperwork. In fact, as a business owner, you’d better learn to love paperwork. There’s lots of it, whether it’s preparing quarterly tax filings, generating monthly invoicing, writing collection letters, or simply returning monthly sales reports to state and local revenue cabinets.

QuickBooks can simplify the process. From helping keep your service rates consistent (you’ll likely want one level for benchwork, another for residential or home office service, and yet a third for commercial accounts) to professionally invoicing customers, QuickBooks can manage much of your finances.

I recommend purchasing the latest Pro version, along with the corresponding Missing Manual book for the version you’ve bought. Plan on spending a couple of weekends, BEFORE you’ve launched your business, doing nothing but studying the financial software. Better yet, obtain assistance from an accountant or certified QuickBooks professional to set up your initial Chart of Accounts. A little extra time taken on the front end to ensure the software’s configured properly for your business will save you tons of time on the backend. I promise.

#6: Backend systems will make or break you
Speaking of backend, backend systems are a pain in the you-know-what. And by backend, I mean all your back office chores, from marketing services to billing to vendor management and fulfillment. Add call management to the list, too.

Just as when you’re stuck in traffic driving between service calls, you don’t make any money when you’re up to your elbows in paper or processing tasks. It’s frustrating. Clients want you to order a new server box, two desktops, and a new laptop. They don’t want to pay a markup, either. But they’re happy to pay you for your time to install the new equipment.

Sound good? It’s not.

Consider the facts. You have to form a relationship with the vendor. It will need your bank account information, maybe proof of insurance (expect to carry one million dollars of general liability), your state sales tax exemption ID, your federal employer ID, a list of references, and a host of other information that takes a day to collect. Granted, you have to do that only once (with each vendor, and you’ll need about 10), but then you still have to wade through their catalogs, select the models you need, and configure them with the appropriate tape arrays, software packages, etc. That takes an hour alone. And again, you’re typically not getting paid for this research. Even if you mark hardware sales up 15 percent, don’t plan on any Hawaiian vacation as a result.

Add in similar trials and tribulations with your marketing efforts, billing systems, vendor maintenance, channel resellers, management issues, etc., and you can see why many consultants keep a full-time office manager on staff. It’s no great revelation of my business strategy to say that’s why I went with a franchise group. I have a world of backend support ready and waiting when I need it. I can’t imagine negotiating favorable or competitive pricing with computer manufacturers, antivirus vendors, or Microsoft if I operated on my own.

Before you open your doors, make sure that you know how you’ll tackle these wide-ranging back office chores. You’ll be challenged with completing them on an almost daily basis.

#7: Vendor relationships will determine your success
This is one of those business facets I didn’t fully appreciate until I was operating on my own. Everyone wants you to sell their stuff, right? How hard can it be for the two of you to hook up?

Well, it’s hard, as it turns out, to obtain products configured exactly as your client needs, quickly and at a competitive price, if you don’t have strong vendor relationships. That means you’ll need to spend time at trade shows and on the telephone developing business relationships with everyone from software manufacturers and hardware distributors to local computer store owners who keep life-saving SATA disks and Cat 5 patch cables in stock when you can’t wait five days for them to show up via UPS.

Different vendors have their own processes, so be prepared to learn myriad ways of signing up and jumping through hoops. Some have online registrations; others prefer faxes and notarized affidavits. Either way, they all take time to launch, so plan on beginning vendor discussions, and establishing your channel relationships, months in advance of opening your consultancy.

#8: You must know what you do (and explain it in 10 seconds or less)
All the start-your-own-business books emphasize writing your 50-page business plan. Yes, I did that. And do you know how many times I’ve referred to it since I opened my business? Right; not once.

The written business plan is essential. Don’t get me wrong. It’s important because it gets you thinking about all those topics (target markets, capitalization, sales and marketing, cash flow requirements, etc.) you must master to be successful.

But here’s what you really need to include in your business plan: a succinct and articulate explanation of what your business does, how the services you provide help other businesses succeed, and how you’re different. Oh, and you need to be able to explain all that in 10 seconds or less.

Really. I’m not kidding.

Business Network International (plan on joining the chapter in your area) is on to something when it allots members just 30 seconds or so to explain what they do and the nature of their competitive advantage. Many times I’ve been approached by prospective customers in elevators, at stoplights (with the windows down), and while getting into my car in a parking lot. Sometimes they have a quick question; other times they need IT help right now. Here’s the best part: they don’t always know it.

The ability to quickly communicate the value of the services you provide is paramount to success. Ensure that you can rattle off a sincere description of what you do and how you do it in 10 seconds and without having to think about it. It must be a natural reaction you develop to specific stimuli. You’ll cash more checks if you do.

#9: It’s all about the branding
Why have I been approached by customers at stoplights, in parking lots, and in elevators? I believe in branding. And unlike many pop business books that broach the subject of branding but don’t leave you with any specifics, here’s what I mean by that.

People know what I do. Give me 10 seconds and I can fill in any knowledge gaps quickly. My “brand” does much of the ice breaking for me. I travel virtually nowhere without it. My company’s logo and telephone number are on shirts. Long sleeve, short sleeve, polos, and dress shirts; they all feature my logo. Both my cars are emblazoned with logos, telephone numbers, and simple marketing messages (which I keep consistent with my Yellow Pages and other advertising).

I have baseball hats for casual trips to Home Depot. My attaché features my company logo. My wife wears shirts displaying the company logo when grocery shopping. After I visit clients, even their PC bears a shiny silver sticker with my logo and telephone number.

Does it work? You better believe it. Hang out a shingle and a few people will call. Plaster a consistent but tasteful logo and simple message on your cars, clothing, ads, Web site, etc., and the calls begin stacking up.

Do you have to live, eat, and breathe the brand? No. But it helps. And let’s face it. After polishing off a burrito and a beer, I don’t mind someone asking if they can give me their laptop to repair when I approach my car in a parking lot. Just in case they have questions, I keep brochures, business cards and notepads (again, all featuring my logo and telephone number) in my glove box. You’d be surprised how quickly I go through them. I am.

#10: A niche is essential
The business plan books touch on this, but they rarely focus on technology consultants directly. You need to know your market niche. I’m talking about your target market here.

Will you service only small businesses? If so, you better familiarize yourself with the software they use. Or are you targeting physicians? In that case, you better know all things HIPAA, Intergy, and Medisoft (among others).

Know up front that you’re not going to be able to master everything. I choose to manage most Windows server, desktop, and network issues. When I encounter issues with specific medical software, dental systems, or client relationship software platforms, I call in an expert trained on those platforms. We work side by side to iron out the issue together.

Over time, that strategy provides me with greater penetration into more markets than if I concentrated solely on mastering medical systems, for example. Plus, clients respect you when you tell them you’re outside your area of expertise. It builds trust, believe it or not.

Whatever you choose to focus on, ensure that you know your niche. Do all you can to research your target market thoroughly and understand the challenges such clients battle daily. Otherwise, you’ll go crazy trying to develop expertise with Medisoft databases at the same time Intel’s rolling out new dual-core chips and Microsoft’s releasing a drastically new version of Office.

10 fundamental differences between Linux and Windows

I have been around the Linux community for more than 10 years now. From the very beginning, I have known that there are basic differences between Linux and Windows that will always set them apart. This is not, in the least, to say one is better than the other. It’s just to say that they are fundamentally different. Many people, looking from the view of one operating system or the other, don’t quite get the differences between these two powerhouses. So I decided it might serve the public well to list 10 of the primary differences between Linux and Windows.

#1: Full access vs. no access
Having access to the source code is probably the single most significant difference between Linux and Windows. The fact that Linux is licensed under the GNU General Public License ensures that users (of all sorts) can access (and alter) the code to the very kernel that serves as the foundation of the Linux operating system. You want to peer at the Windows code? Good luck. Unless you are a member of a very select (and elite, to many) group, you will never lay eyes on the code making up the Windows operating system.

You can look at this from both sides of the fence. Some say giving the public access to the code opens the operating system (and the software that runs on top of it) to malicious developers who will take advantage of any weakness they find. Others say that having full access to the code helps bring about faster improvements and bug fixes to keep those malicious developers from being able to bring the system down. I have, on occasion, dipped into the code of one Linux application or another, and when all was said and done, was happy with the results. Could I have done that with a closed-source Windows application? No.

#2: Licensing freedom vs. licensing restrictions
Along with access comes the difference between the licenses. I’m sure that every IT professional could go on and on about licensing of PC software. But let’s just look at the key aspect of the licenses (without getting into legalese). With a Linux GPL-licensed operating system, you are free to modify that software and use and even republish or sell it (so long as you make the code available). Also, with the GPL, you can download a single copy of a Linux distribution (or application) and install it on as many machines as you like. With the Microsoft license, you can do none of the above. You are bound to the number of licenses you purchase, so if you purchase 10 licenses, you can legally install that operating system (or application) on only 10 machines.

#3: Online peer support vs. paid help-desk support
This is one issue where most companies turn their backs on Linux. But it’s really not necessary. With Linux, you have the support of a huge community via forums, online search, and plenty of dedicated Web sites. And of course, if you feel the need, you can purchase support contracts from some of the bigger Linux companies (Red Hat and Novell, for instance).

However, when you use the peer support inherent in Linux, you do fall prey to time. You could have an issue with something, send out e-mail to a mailing list or post on a forum, and within 10 minutes be flooded with suggestions. Or these suggestions could take hours or days to come in. It seems all up to chance sometimes. Still, generally speaking, most problems with Linux have been encountered and documented. So chances are good you’ll find your solution fairly quickly.

On the other side of the coin is support for Windows. Yes, you can go the same route with Microsoft and depend upon your peers for solutions. There are just as many help sites/lists/forums for Windows as there are for Linux. And you can purchase support from Microsoft itself. Most corporate higher-ups easily fall for the safety net that having a support contract brings. But most higher-ups haven’t had to depend upon said support contract. Of the various people I know who have used either a Linux paid support contract or a Microsoft paid support contract, I can’t say one was more pleased than the other. This, of course, raises the question: “Why do so many say that Microsoft support is superior to Linux paid support?”

#4: Full vs. partial hardware support
One issue that is slowly becoming nonexistent is hardware support. Years ago, if you wanted to install Linux on a machine, you had to make sure you hand-picked each piece of hardware or your installation would not work 100 percent. I can remember, back in 1997-ish, trying to figure out why I couldn’t get Caldera Linux or Red Hat Linux to see my modem. After much looking around, I found I was the proud owner of a Winmodem. So I had to go out and purchase a US Robotics external modem because that was the one modem I knew would work. This is not so much the case now. You can grab a PC (or laptop) and most likely get one or more Linux distributions to install and work nearly 100 percent. But there are still some exceptions. For instance, hibernate/suspend remains a problem with many laptops, although it has come a long way.

With Windows, you know that most every piece of hardware will work with the operating system. Of course, there are times (and I have experienced this over and over) when you will wind up spending much of the day searching for the correct drivers for that piece of hardware you no longer have the install disk for. But you can go out and buy that 10-cent Ethernet card and know it’ll work on your machine (so long as you have, or can find, the drivers). You also can rest assured that when you purchase that insanely powerful graphics card, you will probably be able to take full advantage of its power.

#5: Command line vs. no command line
No matter how far the Linux operating system has come and how amazing the desktop environment becomes, the command line will always be an invaluable tool for administration purposes. Nothing will ever replace my favorite text-based editor, ssh, and any given command-line tool. I can’t imagine administering a Linux machine without the command line. But for the end user — not so much. You could use a Linux machine for years and never touch the command line. Same with Windows. You can still use the command line with Windows, but not nearly to the extent as with Linux. And Microsoft tends to obfuscate the command prompt from users. Without going to Run and entering cmd (or command, or whichever it is these days), the user won’t even know the command-line tool exists. And if a user does get the Windows command line up and running, how useful is it really?

#6: Centralized vs. noncentralized application installation
The heading for this point might have thrown you for a loop. But let’s think about this for a second. With Linux you have (with nearly every distribution) a centralized location where you can search for, add, or remove software. I’m talking about package management systems, such as Synaptic. With Synaptic, you can open up one tool, search for an application (or group of applications), and install that application without having to do any Web searching (or purchasing).

Windows has nothing like this. With Windows, you must know where to find the software you want to install, download the software (or put the CD into your machine), and run setup.exe or install.exe with a simple double-click. For many years, it was thought that installing applications on Windows was far easier than on Linux. And for many years, that thought was right on target. Not so much now. Installation under Linux is simple, painless, and centralized.
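
To illustrate how centralized this is, here is what installing an application looks like from the command line on a Debian-based distribution (Synaptic is simply a graphical front end to the same package system); the package name is just an example:

sudo apt-get update
sudo apt-get install vlc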

#7: Flexibility vs. rigidity
I always compare Linux (especially the desktop) and Windows to a room where the floor and ceiling are either movable or not. With Linux, you have a room where the floor and ceiling can be raised or lowered, at will, as high or low as you want to make them. With Windows, that floor and ceiling are immovable. You can’t go further than Microsoft has deemed it necessary to go.

Take, for instance, the desktop. Unless you are willing to pay for and install a third-party application that can alter the desktop appearance, with Windows you are stuck with what Microsoft has declared is the ideal desktop for you. With Linux, you can pretty much make your desktop look and feel exactly how you want/need. You can have as much or as little on your desktop as you want. From simple flat Fluxbox to a full-blown 3D Compiz experience, the Linux desktop is as flexible an environment as there is on a computer.

#8: Fanboys vs. corporate types
I wanted to add this because even though Linux has reached well beyond its school-project roots, Linux users tend to be soapbox-dwelling fanatics who are quick to spout off about why you should be choosing Linux over Windows. I am guilty of this on a daily basis (I try hard to recruit new fanboys/girls), and it’s a badge I wear proudly. Of course, this is seen as less than professional by some. After all, why would something worthy of a corporate environment have or need cheerleaders? Shouldn’t the software sell itself? Because of the open source nature of Linux, it has to make do without the help of the marketing budgets and deep pockets of Microsoft. With that comes the need for fans to help spread the word. And word of mouth is the best friend of Linux.

Some see the fanaticism as the same college-level hoorah that keeps Linux in the basements for LUG meetings and science projects. But I beg to differ. Another company, thanks to the phenomenon of a simple music player and phone, has fallen into the same fanboy fanaticism, and yet that company’s image has not been besmirched because of that fanaticism. Windows does not have these same fans. Instead, Windows has a league of paper-certified administrators who believe the hype when they hear the misrepresented market share numbers reassuring them they will be employable until the end of time.

#9: Automated vs. nonautomated removable media
I remember the days of old when you had to mount your floppy to use it and unmount it to remove it. Well, those times are drawing to a close — but not completely. One issue that plagues new Linux users is how removable media is used. The idea of having to manually “mount” a CD drive to access the contents of a CD is completely foreign to new users. There is a reason it is this way. Because Linux has always been a multiuser platform, it was thought that forcing a user to mount a media to use it would keep the user’s files from being overwritten by another user. Think about it: On a multiuser system, if everyone had instant access to a disk that had been inserted, what would stop them from deleting or overwriting a file you had just added to the media? Things have now evolved to the point where Linux subsystems are set up so that you can use a removable device in the same way you use them in Windows. But it’s not the norm. And besides, who doesn’t want to manually edit the /etc/fstab file?
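
For the curious, a typical /etc/fstab entry for a CD-ROM drive looks something like the line below; the device and mount point are examples and vary by distribution:

# device      mount point    type      options          dump  pass
/dev/cdrom    /media/cdrom   iso9660   ro,user,noauto   0     0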

#10: Multilayered run levels vs. a single-layered run level
I couldn't figure out how best to title this point, so I went with a description. What I'm talking about is Linux's inherent ability to stop at different run levels. With this, you can work from either the command line (run level 3) or the GUI (run level 5). This can really save your socks when X Windows is fubared and you need to figure out the problem. You can do this by booting into run level 3, logging in as root, and finding/fixing the problem.

With Windows, you’re lucky to get to a command line via safe mode — and then you may or may not have the tools you need to fix the problem. In Linux, even in run level 3, you can still get and install a tool to help you out (hello apt-get install APPLICATION via the command line). Having different run levels is helpful in another way. Say the machine in question is a Web or mail server. You want to give it all the memory you have, so you don’t want the machine to boot into run level 5. However, there are times when you do want the GUI for administrative purposes (even though you can fully administer a Linux server from the command line). Because you can run the startx command from the command line at run level 3, you can still start up X Windows and have your GUI as well. With Windows, you are stuck at the Graphical run level unless you hit a serious problem.
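
To make that concrete, a typical rescue session might look like this (a sketch assuming a sysvinit-style distribution, where run levels are configured in /etc/inittab):

  # Check where you are, then drop to run level 3 (command line, no GUI)
  runlevel
  telinit 3

  # Fix whatever broke X, then bring up the GUI only when you want it
  startx

  # To boot into run level 3 by default, set the initdefault line in
  # /etc/inittab like so:
  # id:3:initdefault: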

Your call…
Those are 10 fundamental differences between Linux and Windows. You can decide for yourself whether you think those differences give the advantage to one operating system or the other. Me? Well, I think my reputation (and opinion) precedes me, so I probably don't need to say I feel strongly that the advantage leans toward Linux.

10 tips for implementing green IT

"Going green" is the hot new trend in the business world, and that naturally filters down to the IT department. Implemented correctly, eco-friendly tactics can make your operations more efficient and save you money.

The goals of green IT include minimizing the use of hazardous materials, maximizing energy efficiency, and encouraging recycling and/or use of biodegradable products — without negatively affecting productivity. In this article, we’ll look at 10 ways to implement green IT practices in your organization.

#1: Buy energy efficient hardware
New offerings from major hardware vendors include notebooks, workstations, and servers that meet the EPA's Energy Star guidelines for lower power consumption. Look for systems that have good EPEAT ratings (www.epeat.net). The ratings use standards set by the IEEE to measure "environmental performance." All EPEAT-registered products must meet Energy Star 4.0 criteria.

Multicore processors increase processing output without substantially increasing energy usage. Also look for high-efficiency (80%) power supplies, variable-speed temperature-controlled fans, small form factor hard drives, and low-voltage processors.

#2: Use power management technology and best practices
Modern operating systems running on Advanced Configuration and Power Interface (ACPI)-enabled systems incorporate power-saving features that allow you to configure monitors and hard disks to power down after a specified period of inactivity. Systems can be set to hibernate when not in use, thus powering down the CPU and RAM as well.
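
To take one concrete example, on a Linux box you can tell a drive to spin down after a period of inactivity with hdparm (a minimal sketch; the device name and timeout below are illustrative):

  # Spin down /dev/sdb after 10 minutes of inactivity
  # (-S takes multiples of 5 seconds, so 120 x 5s = 600s)
  hdparm -S 120 /dev/sdb

  # Or force the drive into standby right now
  hdparm -y /dev/sdb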

Hardware vendors have their own power management software, which they load on their systems or offer as options. For example, HP’s Power Manager provides real-time reporting that shows how the settings you have configured affect the energy used by the computer.

There are also many third-party power management products that can provide further flexibility and control over computers' energy consumption. Some programs make it possible to manually reduce the voltage supplied to the CPU. Others can handle it automatically on systems with Intel SpeedStep or AMD Cool'n'Quiet technologies.

Other technologies, such as Intel's vPro, allow you to turn computers on and off remotely, which saves energy because you don't have to leave systems running all night just to accommodate, for example, a patch deployment scheduled for 2:00 A.M.

#3: Use virtualization technology to consolidate servers
You can reduce the number of physical servers, and thus the energy consumption, by using virtualization technology to run multiple virtual machines on a single physical server. Because many servers are severely underutilized (in many cases, in use only 10 to 15 percent of the time they're running), the savings can be dramatic. VMware claims that its virtualized infrastructure can decrease energy costs by as much as 80 percent.
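
The arithmetic behind claims like that is easy to sanity-check. Here's a back-of-the-envelope sketch; every number in it is an assumption chosen for illustration, not a measurement:

  # 10 standalone servers at ~300 W each, running 24x7, vs. 2 beefier
  # virtualization hosts at ~500 W each
  echo "Before: $(( 10 * 300 * 24 * 365 / 1000 )) kWh/year"   # 26280
  echo "After:  $((  2 * 500 * 24 * 365 / 1000 )) kWh/year"   # 8760

Even with those rough numbers, that's a two-thirds cut before you count the reduced cooling load.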

The same type of benefits can be realized with Microsoft’s Hyper-V virtualization technology, which is an integrated operating system feature of Windows Server 2008.

#4: Consolidate storage with SAN/NAS solutions
Just as server consolidation saves energy, so does consolidation of storage using storage area networks and network attached storage solutions. The Storage Networking Industry Association (SNIA) proposes such practices as powering down selected drives, using slower drives where possible, and not overbuilding power/cooling equipment based on peak power requirements shown in label ratings.

#5: Optimize data center design
Data centers are huge consumers of energy, and cooling all the equipment is a big issue. Data center design that incorporates hot aisle and cold aisle layout, coupled cooling (placing cooling systems closer to heat sources), and liquid cooling can tremendously reduce the energy needed to run the data center.

Another way to “green” the data center is to use low-powered blade servers and more energy-efficient uninterruptible power supplies, which can use 70 percent less power than a legacy UPS.

Optimum data center design for saving energy should also take into account the big picture, by considering the use of alternative energy technologies (photovoltaics, evaporative cooling, etc.) and catalytic converters on backup generators, and from the ground up, by minimizing the footprints of the buildings themselves. Energy-monitoring systems provide the information you need to measure efficiency. A Microsoft TechNet article discusses various ways to build a green data center.

#6: Use thin clients to reduce power usage
Another way to reduce the amount of energy consumed by computers is to deploy thin clients. Because most of the processing is done on the server, the thin clients use very little energy. In fact, a typical thin client uses less power while up and running applications than an Energy Star compliant PC uses in sleep mode. Thin clients are also ecologically friendly because they generate less e-waste. There's no hard drive, less memory, and fewer components to be dealt with at the end of their lifecycles.

Last year, a Verizon spokesman said the company had decreased energy consumption by 30 percent by replacing PCs with thin clients, saving about $1 million per year.

#7: Use more efficient displays
If you have old CRT monitors still in use, replacing them with LCD displays can save up to 70 percent in energy costs. However, not all LCD monitors are created equal when it comes to power consumption. High efficiency LCDs are available from several vendors.

LG recently released what it claims is the world’s most energy efficient LCD monitor, the Flatron W2252TE. Tests have shown that it uses less than half the power of conventional 22-inch monitors.

#8: Recycle systems and supplies
To reduce the load on already overtaxed landfills and to avoid sending hazardous materials to those landfills (where they can leach into the environment and cause harm), old systems and supplies can be reused, repurposed, and/or recycled. You can start by repurposing items within the company. For example, in many cases, when a graphics designer or engineer needs a new high-end workstation to run resource-hungry programs, the old computer is perfectly adequate for someone doing word processing, spreadsheets, or other less intensive tasks. This hand-me-down method allows two workers to get better systems than they had, while requiring the purchase of only one new machine (thus saving money and avoiding unnecessary e-waste).

Old electronic devices can also be reused by those outside the company. You can donate old computers and other devices still in working order to schools and nonprofit organizations, which can still get a lot of use out of them. Finally, much electronic waste can be recycled and the parts used to make new items. Things like old printer cartridges, old cell phones, and paper can all be recycled. Some computer vendors, such as Dell, have programs to take back computers and peripherals for recycling.

#9: Reduce paper consumption
Another way to save money while reducing your company's impact on the environment is to reduce your consumption of paper. You can do this by switching from a paper-based to an electronic workflow: creating, editing, viewing, and delivering documents in digital rather than printed form. Send documents as e-mail attachments rather than faxing.

And when printing is unavoidable, you can still reduce waste and save money by setting your printers to use duplex (double-sided) printing. An internal study conducted by HP showed that a Fortune 500 company can save 800 tons of paper per year (a savings of over $7 million) by printing on both sides.
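
If your printers are shared from a CUPS-based print server, you can even make duplexing the default from the command line (a sketch; "officelaser" is a hypothetical queue name, and the exact option values depend on the printer's PPD):

  # Make double-sided output the default for the queue "officelaser"
  lpoptions -p officelaser -o sides=two-sided-long-edge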

#10: Encourage telecommuting
The ultimate way to have a greener office is to have less office. By encouraging as many workers as possible to telecommute, you can reduce the amount of office space that needs to be heated and cooled, the number of computers required on site, and the number of miles driven by employees to get to and from work. Telecommuting reduces costs for both employers and employees and can also reduce the spread of contagious diseases.

10 surprising things about Windows Server 2008

Windows Server 2003 felt like a refresh of Windows Server 2000. There were few radical changes, and most of the improvements were fairly under the surface. Windows Server 2008, on the other hand, is a full-size helping of “new and improved.” While the overall package is quite good, there are a few surprises, “gotchas,” and hidden delights you will want to know about before deciding if you will be moving to Windows Server 2008 any time soon.

#1: The 64-bit revolution is not complete
There have been 64-bit editions of Windows Server for years now, and Microsoft has made it quite clear that it wants all of its customers to move to 64-bit operating systems. That does not mean that you can throw away your 32-bit Windows Server 2008 CD, though! Over the last few months, I have been shocked on more than one occasion by the pieces of Microsoft software that not only do not have 64-bit versions, but will not run under a 64-bit OS at all. This list includes Team Foundation Server and ISA Server. If you are planning on moving to 64-bit Windows Server 2008, be prepared to have a 32-bit server or two around, whether it be on physical hardware or in a VM.

#2: Who moved my cheese?
While the UI changes in Windows Server 2008 are not nearly as sweeping as the Aero interface in Vista, the system has undergone a dramatic rearrangement and renaming of the various applets. In retrospect, the organization of these items is much more sensible, but that hardly matters when you have years of experience going to a particular area to find something, only to have it suddenly change. Expect to be a bit frustrated in the Control Panel until you get used to it.

#3: Windows Workstation 2008 might catch on
In an odd turn of events, Microsoft has provided the ability to bring the "Vista Desktop Experience" into Windows Server 2008. I doubt that many server administrators were asking for this, but the unusual result is that a number of people are modifying Windows Server 2008 to be as close to a desktop OS as possible. There have always been a few people who use the server edition of Windows as a desktop, but this makes it much easier and friendlier. These home-brewed efforts are generally called "Windows Workstation 2008," in case you're interested in trying it out on your own.

#4: Hyper-V is good, but…
Hyper-V was one of the most anticipated features of Windows Server 2008, and it's surprisingly good, particularly for a version 1 release from Microsoft. It is stable, easy to install and configure, and does not seem to have any major problems. For those of us who have been beaten into the "wait until the third version" or "don't install until SP1" mentality, this is a refreshing surprise.

#5: …Hyper-V is limited
Hyper-V, while of high quality, is sorely lacking in features. Considering that it was billed as a real alternative to VMware and other existing solutions, it is a disappointment (to say the least) that it does not seem to include any utilities for importing VMs from products other than Virtual PC and Virtual Server. Even those imports are not workaround-free. Another real surprise here is the lack of a physical-to-virtual conversion utility. Hyper-V may be a good system, but make sure that you fully try it out before you commit to using it.

#6: NT 4 domain migration — it's not happening
If you have been putting off the painful migration from your NT 4 domain until Windows Server 2008 was released, don't keep waiting. The older version (3.0) of the Active Directory Migration Tool (ADMT) supports migrations from NT 4, but not to Windows Server 2008. The latest version (3.1) supports migrations to Windows Server 2008, but not from NT 4. In other words, there is no direct path: you'll need to migrate off NT 4 to an intermediate Active Directory domain first, and only then move to Windows Server 2008.

#7: The ashtrays are now optional
In prior versions of Windows Server, a lot of applications came installed by default. No one ever uninstalled them because they did not cause any harm, even if you didn't use them or installed an alternative. Now, even the "throwaway" applications, like Windows Backup, are not installed by default. After installation, you need to add "features" to get the full Windows Server suite of applications. This can be frustrating if you are in a hurry, but the reduced clutter and resource overhead are worth it.

#8: Licensing is bewildering
Continuing a hallowed Microsoft tradition, trying to understand the licensing terms of Windows Server 2008 feels like hammering nails with your forehead. So maybe this isn't so much a surprise as a gotcha. The Standard Edition makes sense, but when you get into the issues around virtualization in the Enterprise and Datacenter Editions, things can be a bit confusing. Depending upon your need for virtual machines and the number of physical CPUs (not CPU cores, thankfully) in your server, Enterprise Edition may turn out cheaper or more expensive than Datacenter Edition. One thing to keep in mind is that once you start using virtual machines, you start to like them a lot more than you thought you would. It's easy to find yourself using a lot more of them than originally expected.

#9: There's no bloat
Maybe it's because Vista set expectations of pain, or because hardware has gotten so much cheaper, but Windows Server 2008 does not feel bloated or slow at all. Microsoft has done a pretty good job at minimizing the installed feature set to the bare minimum, and Server Core can take that even further. Depending upon your needs, it can be quite possible to upgrade even older equipment to Windows Server 2008 without needing to beef up the hardware.

#10: Quality beats expectations
Microsoft customers have developed low expectations of quality over the years, unfortunately with good reason. While its track record for initial releases, in terms of security holes and bug counts, seems to be improving, customers are still howling about Vista. As a result, it has come as a real surprise that the overall reaction to Windows Server 2008 has been muted, to say the least. The horror stories just are not flying around like they were with Vista. Maybe it's the extra year spent working on it, or the different expectations of the people who work with servers, but Windows Server 2008 has had a pretty warm reception so far. And that says a lot about its quality. There is nothing particularly flashy or standout about it. But at the same time, it is a solid, high quality product. And that is exactly what system administrators need.

10 ways to get maximum value from a professional development class

From time to time you will find yourself taking a professional development class. It could cover communications, conflict management, business writing, or some other area. It might be a class that’s internal to your company, or it might be a class you attend outside, with people from other companies. In any case, your company (or you personally) made a substantial investment in this training. Here are pointers for management — and for you — to ensure both of you gain maximum value from the class.

#1: Management should attend
I wish I had a dollar for every time a non-management attendee said to me during a session I teach, "Calvin, your material is great, but you need to be saying this to our bosses." On the other hand, lest I become too vain, maybe there are others who said to themselves, "This was a waste of time, so our managers should suffer as well."

In either case, management increases its credibility among staff by attending the same training. Unless it does so, the chances are great that management will undercut the philosophy the class is attempting to impart.

By the way, if you hold to the “waste of time” view, please see point 5 below.

#2: Separate managers from subordinates
It's generally inadvisable to have managers sit through the entire class with their direct subordinates. The presence of the former could inhibit the latter from speaking up, particularly when organizational issues and policies are being discussed.

Two alternatives address this concern. First, management can attend its own separate session. Second, management can attend the same session as direct subordinates but be excused 30 to 45 minutes before the end. At that point, staff attendees who have issues can raise them. In other words, that's the time attendees can start saying, "Calvin, you're right in what you're saying, but that won't work here because…"

#3: Management must respect class time
If management is sending staff to training, it has to respect that time. The "tap on the shoulder" to handle an issue that takes "just a second" never takes just a second; it ends up taking that attendee out of class completely, which defeats the purpose of having that person attend in the first place.

#4: Distribute attendance among many departments
Given the choice of having many attendees from one (or only a few) departments vs. having only a few attendees from many departments, I choose the latter. From a practical standpoint, this strategy reduces the burden on those who aren't attending class but still must support business operations. From an organizational standpoint, it can help build morale by giving attendees exposure to other departments and their workers.

#5: Recognize the value of the training
From time to time, when I talk about skills in communicating with customers, I see people with rolling eyes and folded arms. No doubt they’re saying to themselves, “Why am I wasting my time here? I could be writing a program / configuring a router / completing a problem ticket.”

That's why I often open with a quiz: What do Operating System/2, Betamax, and the Dvorak keyboard all have in common? Answer: They were technically superior to their competition but nonetheless became obsolete. In the same way, technical people who rely only on their technical skills for career success could be in for a shock, because skill in working with others is at least as important, if not more so.

Try to keep an open mind. Will some training turn out to be a “bomb”? I hope not, but even in that case, you can still benefit. Sit down and analyze why you thought the session failed. Then, before your next session, resolve to discuss those concerns with the instructor if you can.

#6: Make sure your job is covered during your absence
You can do your part to avoid getting the aforementioned tap on the shoulder from the boss. Make sure your co-workers and customers are aware of your absence. Adjust your voicemail greeting and set up an e-mail or instant-message autoresponse, if you can. Make sure they know of any open items or issues and how they should be handled.

#7: Have specific personal objectives
Your time in class will be far more meaningful if you set personal objectives beforehand. Read up on any class descriptions, syllabi, or topic lists. Then go over mentally the areas where you believe you most need improvement. When you set your objectives, make sure they are measurable — and more important, that they're realistic.

#8: Speak up
The biggest shock to many would-be law students is the total irrelevance of class participation in one’s final grade. Nonetheless, I still remember Professor Woodward’s advice in contracts class. He said that we still should speak in class, because doing so forces us to master the material. In other words, we may think we know the material, but having to articulate it is the acid test.

You probably won’t get a grade for your professional development class. However, you probably will pick up the concepts more quickly, and retain them better, if you speak up.

#9: Apply exercises and activities to your job
Those exercises where you walk the maze, build the toothpick tower, or sequence the 15 items to help you survive the desert aren’t there just for the heck of it. They’re there because they deal with some skill that’s important to your job. The instructor or facilitator, in discussing the exercise afterward, should be making that association. If not, make it yourself. Write a note to yourself about the lessons you learned from the exercise. In particular, ask yourself how these lessons apply to your job and how you might act differently having gained the insights you did.

#10: Write a letter to yourself
At the end of sessions I lead, I ask attendees to write a letter to themselves about what they learned. I then take those letters and simply hold them for about three months, after which I return them to their respective authors. I do so because many attendees remember clearly the material immediately after class. However, in the weeks that follow, their memories may dim. Seeing the letter refreshes their memory and reinforces the class session.

If the leader of your session doesn’t follow this practice, consider doing it on your own. Write a letter, seal it, and just put it somewhere that it won’t get lost. Maybe write a note on the outside, such as, “Open on [date three months from now].”

10 reasons why you should use the Opera browser

I have gone through many browsers in my lifetime of IT. From Lynx to Mosaic to Mozilla to Netscape to Firefox to Internet Explorer to Safari to Flock. But there’s another browser that peeks its head in and out of that cycle — Opera. Opera is a browser that gets little press in the battle for Internet supremacy. But it’s a browser that is making huge waves in other arenas (Can you say “mobile”?) and is always a steady player in the browser market.

But why would you want to use a browser that gets little love in the market? I will give you 10 good reasons.

#1: Speed
It seems no matter how many leaps and bounds Firefox and Internet Explorer make, Opera is always able to render pages faster. In both cold and warm starts, Opera beats both Firefox and Internet Explorer. We're not talking about a difference the naked eye is incapable of seeing; the speed difference is actually noticeable. So if you are a speed junkie, and most of you are, you should be using Opera for this reason alone.

#2: Speed Dial
Speed Dial is one of those features that generally steals the show with browsers. It's basically a set of visual bookmarks on one page. To add a page to Speed Dial, you simply click on an empty slot in the Speed Dial page and enter the information. When you have a full page of Speed Dial bookmarks, you can quickly go to the page you want by clicking the related image. For even faster browsing, you can press the Ctrl + * key combination (where * is the number 1-9 associated with your page as assigned in Speed Dial).

#3: Widgets
Opera Widgets are like Firefox extensions on steroids. Widgets are what the evolution of the Web is all about — little Web-based applications you can run from inside (or, in some cases, outside) your browser. Some of the widgets are useful (such as the Touch The Sky international weather applet) and some are just fun (such as the Sim Aquarium). They are just as easy to install as Firefox extensions.

#4: Wand
Save form information and/or passwords with this handy tool. Every time you fill out a form or a password, the Wand will ask you if you want to save the information. When you save information (say, a form), a yellow border will appear around the form. The next time you need to fill out that form, click on the Wand button or press Ctrl + Enter, and the information will automatically be filled in for you.

#5: Notes
Have you ever been browsing and wanted to take notes on a page or site (or about something totally unrelated to your Web browsing)? Opera comes complete with a small Notes application that allows you to jot down whatever you need to jot down. To access Notes, click on the Tools menu and then click on Notes. The tool itself is incredibly simple to use and equally handy.

#6: BitTorrent
Yes, it is true: Opera has built-in support for the BitTorrent protocol. And the built-in BitTorrent client is simple to use: Click on a Torrent link, and a dialog will open asking you where you want to download the file. The Torrent client is enabled by default, so if your company doesn't allow Torrenting, you should probably disable this feature. Note: When downloading Torrents, you will continue to share content until you either stop the download or close the browser.

#7: Display modes
Another unique-to-Opera feature is its display modes, which allow you to quickly switch between Fit To Width and Full Screen mode. Fit To Width mode adjusts the page size to the available screen space while using flexible reformatting. Full Screen mode gives over the entire screen space to browsing. In this mode, you drop all menus and toolbars, leaving only context menus, mouse gestures, and keyboard shortcuts. The latter mode is especially good for smaller screens.

#8: Quick Preferences
The Quick Preferences menu is one of those features the power user will really appreciate. I quite often use it to enable or disable various features, and not having to open up the Preferences window makes for a much quicker experience. From this menu, you can alter preferences for pop-ups, images, Java/JavaScript, plug-ins, cookies, and proxies. This is perfect if you are one of those users who block cookies all the time, until a site comes along where you want to enable them.

#9: Mouse Gestures
This feature tends to bother most keyboard junkies (those who can't stand to move their fingers from the keyboard). But Mouse Gestures is a built-in feature that applies certain actions to specific mouse movements. For example, you can go back a page by holding down the right mouse button and clicking the left mouse button. This is pretty handy on a laptop, where using the track pad can take more time than you probably want to spend on navigation. But even for those who prefer to keep their hands on the keys, the feature can still save time. Instead of having to get to the mouse, move the pointer to the toolbar, and click a button, you simply have to get your hand to the mouse and make the gesture for the action to take place. Of course, this does require memorizing the gestures.

#10: Session saving
I love this feature. All too many times, I have needed to close a browser window but didn't want to lose a page. To keep from losing the page, I would keep a temporary bookmark file where I could house these bookmarks. But with Opera, that's history. If you have a page (or number of pages) you want to save, you just go to the File menu and then the Sessions submenu and click Save This Session. The next time you open Opera, the same tabs will open. You can also manage your saved sessions so that you can save multiple sessions and delete selected sessions.

The upshot
With just the above list, you can see how easily Opera separates itself from the rest of the crowd. It's a different beast in the Web browsing space. It's fast, stable, and cross platform, and it contains many features other browsers can't touch.

10 things Linux does better than Windows

Throughout my 10+ years of using Linux, I have heard about everything that Windows does better than Linux. So I thought it time to shoot back and remind everyone of what Linux does better than Windows. Of course, being the zealot that I am, I could list far more than 10 items. But I will stick with the theme and list only what I deem to be the 10 areas where Linux not only does better than Windows but blows it out of the water.

#1: TCO
This can o' worms has been, and will be, debated until both operating systems are no more. But let's face it — the cost of a per-seat Windows license for a large company far outweighs the cost of having IT learn Linux. This is so for a couple of reasons.

First, most IT pros already know a thing or two about Linux. Second, today's Linux is not your mother's Linux. Linux has come a long, long way from where it was when I first started. Ten years ago, I would have said, hands down, Windows wins the TCO battle. But that was before KDE and GNOME brought their desktops to the point where any given group of monkeys could type Hamlet on a Linux box as quickly as they could on a Windows box. I bet any IT department could roll out Linux and do it in such a way that the end users would hardly know the difference. With KDE 4.1 leaps and bounds beyond 4.0, it's already apparent where the Linux desktop is going — straight into the end users' hands. So with all the FUD and rhetoric aside, Windows can't compete with Linux in TCO. Add to that the cost of software (including antivirus and spyware protection) for Windows vs. Linux, and your IT budget just fell deeply into the red.

#2: Desktop
You can't keep a straight face and say the Linux desktop is more difficult to use than the Windows desktop. If you can, you might want to check the release number of the Linux distribution you are using. Both GNOME and KDE have outpaced Windows for user-friendliness. Even KDE 4, which has altered the path of KDE quite a bit, will make any given user feel at home with the interface. But the Linux desktop beats the Windows desktop for more reasons than just user-friendliness. It's far more flexible than anything Microsoft has ever released. If you don't like the way the Linux desktop looks or behaves, change it. If you don't like the desktop included with your distribution, add another. And what if, on rare occasion, the desktop locks up? Well, Windows might require a hard restart. Linux? Hit Ctrl + Alt + Backspace to force a logout of X Windows. Or you can always drop into a virtual console and kill the application that caused your desktop to freeze. It's all about flexibility… something the Windows desktop does not enjoy.
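
For the curious, that last rescue looks something like this (a sketch; firefox is just a stand-in for whatever application froze):

  # Jump to a virtual console with Ctrl + Alt + F1, log in, then:
  ps aux | grep firefox      # find the offending process
  kill <PID>                 # replace <PID> with the process ID from ps
  kill -9 <PID>              # the heavy hammer, if it ignores the first kill

  # Ctrl + Alt + F7 usually takes you back to X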

#3: Server
For anyone who thinks Windows has the server market cornered, I would ask you to wake up and join the 21st century. Linux can, and does, serve up anything and everything and does it easily and well. It's fast, secure, easy to configure, and very scalable. And let's say you don't happen to be fond of Sendmail. If that's the case, you have plenty of alternatives to choose from. The same goes for serving up Web pages: there are plenty of alternatives to Apache, some of which are incredibly lightweight.
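
On a Debian-based distribution, for example, swapping in one of those alternatives is a one-liner apiece (package names assume the Debian/Ubuntu repositories):

  apt-get install postfix     # a popular Sendmail alternative
  apt-get install lighttpd    # a lightweight alternative to Apache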

#4: Security
Recently, there was a scare in the IT world known as Phalanx 2. It actually hit Linux. But the real issue was that it hit Linux servers that hadn't been updated. It was poor administration that caused this little gem to get noticed. The patch, as usual in the Linux world, came nearly as soon as word got out. And that's the rub. Security issues plague Windows for a couple of reasons: The operating system comes complete with plenty of security holes, and Microsoft is slow to release patches for those holes. Of course, this is not to say that Linux is immune. It isn't. But it is less susceptible to attacks and faster to fix problems.
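
And staying updated is hardly a burden. On a Debian-based server, the routine administration that the Phalanx 2 victims skipped boils down to something like this, run regularly:

  # Refresh the package lists, then apply all pending updates
  apt-get update && apt-get upgrade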

#5: Flexibility
This stems from the desktop but, because Linux is such an amazingly adaptable operating system, it's wrong to confine flexibility to the desktop alone. Here's the thing: With Linux, there is always more than one way to handle a task. Add to that the ability to get really creative with your problem solving, and you have the makings of a far superior system. Windows is about as inflexible as an operating system can be. Think about it this way: Out of the box, what can you do with Windows? You can surf the Web and get e-mail. Out of the box, what can you do with Linux? I think the better question is what can you NOT do with Linux? Linux is to Legos as Windows is to Lincoln Logs. With Lincoln Logs, you have the pieces to make fine log cabins. With Legos, you have the pieces to make, well, anything. And then you have all the fanboys making Star Wars Legos and Legos video games. Just where did all those Lincoln Logs fanboys go?

#6: Package management
Really, all I should have to say about this is that Windows does no package management. Sure, you can always install an application with a single click. But what if you don't know which package you're looking for? Where is the repository to search? Where are the various means of installing applications? Where are the dependency checks? Where are the md5 checks? And what about the fact that, in Windows, you don't need root-level access to install just about any application? Safety? Security? Sanity?
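
For anyone who hasn't watched centralized package management in action, a typical session looks like this (a sketch using apt on a Debian-based system; the search term and package are only examples):

  apt-cache search "image editor"   # search the repository
  apt-cache depends gimp            # see what a package pulls in
  apt-get install gimp              # install it; dependencies and
                                    # checksum verification are handled for you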

#7: Community
About the only community for Windows is the flock of MCSEs, the denizens of the Microsoft campus, and the countless third-party software companies preying on those who can't figure out what to do when Windows goes down for the count. Linux has always been and always will be about community. It was built by a community and for a community. And this Linux community is there to help those in need. From mailing lists to LUGs (Linux user groups) to forums to developers to Linus Torvalds himself (the creator of Linux), Linux is a community strong with users of all types, ages, nationalities, and social anxieties.

#8: Interoperability
Windows plays REALLY well with Windows. Linux plays well with everyone. I've never met a system I couldn't connect Linux to. That includes OS X, Windows, various Linux distributions, OS/2, PlayStations… the list goes on and on. Without the help of third-party software, Windows isn't nearly as interoperable. And we haven't even touched on file formats. With OpenOffice, you can open and save in nearly any format (regardless of release date). Have you come across that docx format yet? Had fun getting it to open in anything but MS Word 2007 or later?
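
As a small taste, here's Linux talking to a Windows box using Samba's standard tools (a sketch; the hostname, share name, and credentials are made up for the example):

  # List the shares a Windows box is offering
  smbclient -L //winbox -U someuser

  # Mount one of them like any local directory (run as root)
  mount -t cifs //winbox/docs /mnt/docs -o username=someuser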

#9: Command line
This is another item where I shouldn't have to say much more than the title. The Linux command line can do nearly anything you need to work in the Linux operating system. Yes, you need a bit of knowledge to do this, but the same holds true for the Windows command line. The biggest difference is the amount you can do when met with only the command line. If you had to administer two machines through the command line only (one Linux box and one Windows box), you would quickly understand just how superior the Linux CLI is to the vastly underpowered Windows CLI.
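
One small illustration (an ad hoc one-liner, nothing more): answering "Which directories are eating my /home partition?" from a bare prompt:

  # The ten biggest directories under /home, largest first, sizes in MB
  du -sm /home/* | sort -rn | head -10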

#10: Evolution
For most users, Vista was a step backward. And that step backward took a long time (five years) to come to fruition. With most Linux distributions, new releases are made available every six months. And some of them are major jumps in technological advancement. Linux also listens to its community. What are its users saying, and what do they need? From the kernel to the desktop, the Linux developer community is in sync with its users. Microsoft? Not so much. Microsoft takes its time to release what may or may not be an improvement. And, generally speaking, those Microsoft release dates are as far from set in stone as something can be. It should go without saying that Microsoft is not an agile developer. In fact, I would say Microsoft, in its arrogance, insists that companies, users, and third-party developers evolve around it.

That’s my short list of big-ticket items that Linux does better than Windows. There will be those naysayers who feel differently, but I think most people will agree with these points. Of course, I am not so closed-minded as to think that there is nothing that Windows does better than Linux. I can think of a few off the top of my head: PR, marketing, FUD, games, crash, and USB scanners.