Oh yeah. You’re going to work for yourself, be your own boss. Come and go when you want. No more kowtowing to The Man, right?
Running your own computer consulting business is rewarding, but it’s also full of competing challenges. Before you make the jump into entrepreneurship, take a moment to benefit from the few hundred hours of research I’ve invested and the real-world lessons I’ve learned in launching my own computer consulting franchise.
There are plenty of launch-your-own-business books out there. I know. I read several of them. Most are great resources. Many provide critical lessons in best managing liquid assets, understanding opportunity costs, and leveraging existing business relationships. But when it comes down to the dirty details, here are 10 things you really, really need to know (in street language) before quitting your day job.
#1: You need to incorporate
You don’t want to lose your house if a client’s data is lost. If you try hanging out a shingle as an independent lone ranger, your personal assets could be at risk. (Note that I’m not dispensing legal or accounting advice. Consult your attorney for legal matters and a qualified accountant regarding tax issues.)
Ultimately, life is easier when your business operates as a business and not as a side project you maintain when you feel like it. Clients appreciate the assurance of working with a dedicated business. I can’t tell you how many clients I’ve obtained whose last IT guy “did it on the side,” took a corporate job, and no longer has time to help now that the client’s business has come to a standstill because of computer problems. Clients want to know you’re serious about providing service and that they’re not entering a new relationship in which they’re just going to get burned again in a few months’ time.
#2: You need to register for a federal tax ID number
Hardly anyone (vendors, banks, and even some clients) will talk to you if you don’t have one.
Wait a second. Didn’t you just complete a mountain of paperwork to form your business (either as a corporation or LLC)? Yes, you did. But attorneys and online services charge incredible rates to obtain a federal tax ID for you.
Here’s a secret: It’s easy. Just go to the IRS Web site, complete and submit form SS-4 online, and voila. You’ll be the proud new owner of a federal tax ID.
#3: You need to register for a state sales tax exemption
You need a state sales tax exemption, too (most likely). If you’re in a state that collects sales tax, you’re responsible for ensuring sales tax gets paid on any item you sell a client. In such states, whether you buy a PC for a customer or purchase antivirus licenses, taxes need to be paid.
Check your state’s Web site. Look for information on the state’s department of revenue. You’ll probably have to complete a form, possibly even have it notarized, and return it to the state’s revenue cabinet. Within a few weeks, you’ll receive an account number. You’ll use that account number when you purchase products from vendors. You can opt NOT to pay sales tax when you purchase the item, instead choosing to pay the sales tax when you sell the item to the client.
Why do it this way? Because many (most) consultants charge clients far more for a purchase than the consultant paid. Some call it markup; accountants prefer to view it as profit. Either way, you certainly don’t want to be stuck calculating how much tax is still owed on a sale when part of it was already paid at purchase. Thus, charge tax at the point of sale to the customer, not when you purchase the item.
#4: You need to register with local authorities
Local government wants its money, too. Depending on where your business is located and where it services customers, you’ll likely need to register for a business license. As with the state sales tax exemption, contact your local government’s revenue cabinet or revenue commission for more information on registering your business. Expect to pay a fee for the privilege.
#5: QuickBooks is your friend
Once your paperwork’s complete, it’s time for more paperwork. In fact, as a business owner, you’d better learn to love paperwork. There’s lots of it, whether it’s preparing quarterly tax filings, generating monthly invoices, writing collection letters, or simply returning monthly sales reports to state and local revenue cabinets.
QuickBooks can simplify the process. From helping keep your service rates consistent (you’ll likely want one level for benchwork, another for residential or home office service, and yet a third for commercial accounts) to professionally invoicing customers, QuickBooks can manage much of your finances.
I recommend purchasing the latest Pro version, along with the corresponding Missing Manual book for the version you’ve bought. Plan on spending a couple of weekends, BEFORE you’ve launched your business, doing nothing but studying the financial software. Better yet, obtain assistance from an accountant or certified QuickBooks professional to set up your initial Chart of Accounts. A little extra time taken on the front end to ensure the software’s configured properly for your business will save you tons of time on the backend. I promise.
#6: Backend systems will make or break you
Speaking of backend, backend systems are a pain in the you-know-what. And by backend, I mean all your back office chores, from marketing services to billing to vendor management and fulfillment. Add call management to the list, too.
Just as you don’t make any money when you’re stuck in traffic between service calls, you don’t make any money when you’re up to your elbows in paper or processing tasks. It’s frustrating. Clients want you to order a new server box, two desktops, and a new laptop. They don’t want to pay a markup, either. But they’re happy to pay you for your time to install the new equipment.
Sound good? It’s not.
Consider the facts. You have to form a relationship with the vendor. It will need your bank account information, maybe proof of insurance (expect to carry one million dollars of general liability), your state sales tax exemption ID, your federal employer ID, a list of references, and a host of other information that takes a day to collect. Granted, you have to do that only once (with each vendor, and you’ll need about 10), but then you still have to wade through their catalogs, select the models you need, and configure them with the appropriate tape arrays, software packages, etc. That takes an hour alone. And again, you’re typically not getting paid for this research. Even if you mark hardware sales up 15 percent, don’t plan on any Hawaiian vacation as a result.
Add in similar trials and tribulations with your marketing efforts, billing systems, vendor maintenance, channel resellers, management issues, etc., and you can see why many consultants keep a full-time office manager on staff. It’s no great revelation of my business strategy to say that’s why I went with a franchise group. I have a world of backend support ready and waiting when I need it. I can’t imagine negotiating favorable or competitive pricing with computer manufacturers, antivirus vendors, or Microsoft if I operated on my own.
Before you open your doors, make sure that you know how you’ll tackle these wide-ranging back office chores. You’ll be challenged with completing them on an almost daily basis.
#7: Vendor relationships will determine your success
This is one of those business facets I didn’t fully appreciate until I was operating on my own. Everyone wants you to sell their stuff, right? How hard can it be for the two of you to hook up?
Well, it’s hard, as it turns out, to quickly obtain products configured exactly as your client needs, at a competitive price, if you don’t have strong vendor relationships. That means you’ll need to spend time at trade shows and on the telephone developing business relationships with everyone from software manufacturers and hardware distributors to local computer store owners who keep life-saving SATA disks and Cat 5 patch cables in stock when you can’t wait five days for them to show up via UPS.
Different vendors have their own processes, so be prepared to learn myriad ways of signing up and jumping through hoops. Some have online registrations; others prefer faxes and notarized affidavits. Either way, they all take time to launch, so plan on beginning vendor discussions, and establishing your channel relationships, months in advance of opening your consultancy.
#8: You must know what you do (and explain it in 10 seconds or less)
All the start-your-own-business books emphasize writing your 50-page business plan. Yes, I did that. And do you know how many times I’ve referred to it since I opened my business? Right; not once.
The written business plan is essential. Don’t get me wrong. It’s important because it gets you thinking about all those topics (target markets, capitalization, sales and marketing, cash flow requirements, etc.) you must master to be successful.
But here’s what you really need to include in your business plan: a succinct and articulate explanation of what your business does, how the services you provide help other businesses succeed, and how you’re different. Oh, and you need to be able to explain all that in 10 seconds or less.
Really. I’m not kidding.
Business Network International (plan on joining the chapter in your area) is on to something when it allots members just 30 seconds or so to explain what they do and the nature of their competitive advantage. Many times I’ve been approached by prospective customers in elevators, at stoplights (with the windows down), and while getting into my car in a parking lot. Sometimes they have a quick question; other times they need IT help right now. Here’s the best part: they don’t always know it.
The ability to quickly communicate the value of the services you provide is paramount to success. Ensure that you can rattle off a sincere description of what you do and how you do it in 10 seconds and without having to think about it. It must be a natural reaction you develop to specific stimuli. You’ll cash more checks if you do.
#9: It’s all about the branding
Why have I been approached by customers at stoplights, in parking lots, and in elevators? I believe in branding. And unlike many pop business books that broach the subject of branding but don’t leave you with any specifics, here’s what I mean by that.
People know what I do. Give me 10 seconds and I can fill in any knowledge gaps quickly. My “brand” does much of the ice breaking for me. I travel virtually nowhere without it. My company’s logo and telephone number are on shirts. Long sleeve, short sleeve, polos, and dress shirts: they all feature my logo. Both my cars are emblazoned with logos, telephone numbers, and simple marketing messages (which I keep consistent with my Yellow Pages and other advertising).
I have baseball hats for casual trips to Home Depot. My attaché features my company logo. My wife wears shirts displaying the company logo when grocery shopping. After I visit clients, even their PC bears a shiny silver sticker with my logo and telephone number.
Does it work? You better believe it. Hang out a shingle and a few people will call. Plaster a consistent but tasteful logo and simple message on your cars, clothing, ads, Web site, etc., and the calls begin stacking up.
Do you have to live, eat, and breathe the brand? No. But it helps. And let’s face it. After polishing off a burrito and a beer, I don’t mind someone asking if they can give me their laptop to repair as I approach my car in a parking lot. Just in case they have questions, I keep brochures, business cards, and notepads (again, all featuring my logo and telephone number) in my glove box. You’d be surprised how quickly I go through them. I am.
#10: A niche is essential
The business plan books touch on this, but they rarely focus on technology consultants directly. You need to know your market niche. I’m talking about your target market here.
Will you service only small businesses? If so, you better familiarize yourself with the software they use. Or are you targeting physicians? In that case, you better know all things HIPAA, Intergy, and Medisoft (among others).
Know up front that you’re not going to be able to master everything. I choose to manage most Windows server, desktop, and network issues. When I encounter issues with specific medical software, dental systems, or client relationship software platforms, I call in an expert trained on those platforms. We work side by side to iron out the issue.
Over time, that strategy provides me with greater penetration into more markets than if I concentrated solely on mastering medical systems, for example. Plus, clients respect you when you tell them you’re outside your area of expertise. It builds trust, believe it or not.
Whatever you choose to focus on, ensure that you know your niche. Do all you can to research your target market thoroughly and understand the challenges such clients battle daily. Otherwise, you’ll go crazy trying to develop expertise with Medisoft databases at the same time Intel’s rolling out new dual-core chips and Microsoft’s releasing a drastically new version of Office.
Tuesday, October 7, 2008
10 fundamental differences between Linux and Windows
I have been around the Linux community for more than 10 years now. From the very beginning, I have known that there are basic differences between Linux and Windows that will always set them apart. This is not, in the least, to say one is better than the other. It’s just to say that they are fundamentally different. Many people, looking from the view of one operating system or the other, don’t quite get the differences between these two powerhouses. So I decided it might serve the public well to list 10 of the primary differences between Linux and Windows.
#1: Full access vs. no access
Having access to the source code is probably the single most significant difference between Linux and Windows. The fact that Linux is licensed under the GNU General Public License ensures that users (of all sorts) can access (and alter) the code to the very kernel that serves as the foundation of the Linux operating system. You want to peer at the Windows code? Good luck. Unless you are a member of a very select (and elite, to many) group, you will never lay eyes on code making up the Windows operating system.
You can look at this from both sides of the fence. Some say giving the public access to the code opens the operating system (and the software that runs on top of it) to malicious developers who will take advantage of any weakness they find. Others say that having full access to the code helps bring about faster improvements and bug fixes to keep those malicious developers from being able to bring the system down. I have, on occasion, dipped into the code of one Linux application or another, and when all was said and done, was happy with the results. Could I have done that with a closed-source Windows application? No.
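If you’ve never tried it, it’s worth seeing how little friction there is. Here’s a minimal sketch on a Debian-based distribution (the package name and kernel version are just examples, and apt-get source assumes deb-src lines in /etc/apt/sources.list):

    apt-get source gimp   # pull the complete source of a packaged application

    # Or fetch the kernel itself and read the scheduler
    wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.27.tar.bz2
    tar xjf linux-2.6.27.tar.bz2
    less linux-2.6.27/kernel/sched.c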
#2: Licensing freedom vs. licensing restrictions
Along with access comes the difference between the licenses. I’m sure that every IT professional could go on and on about licensing of PC software. But let’s just look at the key aspect of the licenses (without getting into legalese). With a Linux GPL-licensed operating system, you are free to modify that software and use and even republish or sell it (so long as you make the code available). Also, with the GPL, you can download a single copy of a Linux distribution (or application) and install it on as many machines as you like. With the Microsoft license, you can do none of the above. You are bound to the number of licenses you purchase, so if you purchase 10 licenses, you can legally install that operating system (or application) on only 10 machines.
#3: Online peer support vs. paid help-desk support
This is one issue where most companies turn their backs on Linux. But it’s really not necessary. With Linux, you have the support of a huge community via forums, online search, and plenty of dedicated Web sites. And of course, if you feel the need, you can purchase support contracts from some of the bigger Linux companies (Red Hat and Novell, for instance).
However, when you use the peer support inherent in Linux, you do fall prey to time. You could have an issue with something, send out e-mail to a mailing list or post on a forum, and within 10 minutes be flooded with suggestions. Or those suggestions could take hours or days to come in. It seems all up to chance sometimes. Still, generally speaking, most problems with Linux have been encountered and documented. So chances are good you’ll find your solution fairly quickly.
On the other side of the coin is support for Windows. Yes, you can go the same route with Microsoft and depend upon your peers for solutions. There are just as many help sites/lists/forums for Windows as there are for Linux. And you can purchase support from Microsoft itself. Most corporate higher-ups easily fall victim to the safety net that having a support contract brings. But most higher-ups haven’t had to depend upon said support contract. Of the various people I know who have used either a Linux paid support contract or a Microsoft paid support contract, I can’t say one was more pleased than the other. This of course raises the question “Why do so many say that Microsoft support is superior to Linux paid support?”
#4: Full vs. partial hardware support
One issue that is slowly becoming nonexistent is hardware support. Years ago, if you wanted to install Linux on a machine you had to make sure you hand-picked each piece of hardware or your installation would not work 100 percent. I can remember, back in 1997-ish, trying to figure out why I couldn’t get Caldera Linux or Red Hat Linux to see my modem. After much looking around, I found I was the proud owner of a Winmodem. So I had to go out and purchase a US Robotics external modem because that was the one modem I knew would work. This is not so much the case now. You can grab a PC (or laptop) and most likely get one or more Linux distributions to install and work nearly 100 percent. But there are still some exceptions. For instance, hibernate/suspend remains a problem with many laptops, although it has come a long way.
With Windows, you know that most every piece of hardware will work with the operating system. Of course, there are times (and I have experienced this over and over) when you will wind up spending much of the day searching for the correct drivers for that piece of hardware you no longer have the install disk for. But you can go out and buy that 10-cent Ethernet card and know it’ll work on your machine (so long as you have, or can find, the drivers). You also can rest assured that when you purchase that insanely powerful graphics card, you will probably be able to take full advantage of its power.
#5: Command line vs. no command line
No matter how far the Linux operating system has come and how amazing the desktop environment becomes, the command line will always be an invaluable tool for administration purposes. Nothing will ever replace my favorite text-based editor, ssh, and any given command-line tool. I can’t imagine administering a Linux machine without the command line. But for the end user — not so much. You could use a Linux machine for years and never touch the command line. Same with Windows. You can still use the command line with Windows, but not nearly to the extent as with Linux. And Microsoft tends to hide the command prompt from users. Without going to Run and entering cmd (or command, or whichever it is these days), the user won’t even know the command-line tool exists. And if a user does get the Windows command line up and running, how useful is it really?
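To give you a taste of what that usefulness looks like on the Linux side, here are a few everyday admin one-liners; the host name, user, and paths are placeholders:

    ssh admin@server1 'df -h'                     # check disk usage on a remote box
    grep -i error /var/log/syslog | tail          # skim the most recent errors
    ps aux --sort=-%mem | head                    # find the memory hogs
    tar czf /backup/etc-$(date +%F).tar.gz /etc   # one-line config backup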
#6: Centralized vs. noncentralized application installation
The heading for this point might have thrown you for a loop. But let’s think about this for a second. With Linux you have (with nearly every distribution) a centralized location where you can search for, add, or remove software. I’m talking about package management systems, such as Synaptic. With Synaptic, you can open up one tool, search for an application (or group of applications), and install that application without having to do any Web searching (or purchasing).
Windows has nothing like this. With Windows, you must know where to find the software you want to install, download the software (or put the CD into your machine), and run setup.exe or install.exe with a simple double-click. For many years, it was thought that installing applications on Windows was far easier than on Linux. And for many years, that thought was right on target. Not so much now. Installation under Linux is simple, painless, and centralized.
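And because Synaptic is just a front end to APT, that same centralized workflow is a few keystrokes away in a terminal. A quick sketch (the package name is an example):

    apt-get update                    # refresh the package index
    apt-cache search "image editor"   # search every configured repository
    apt-get install gimp              # fetch, resolve dependencies, install
    apt-get remove gimp               # uninstall just as cleanly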
#7: Flexibility vs. rigidity
I always compare Linux (especially the desktop) and Windows to a room where the floor and ceiling are either movable or not. With Linux, you have a room where the floor and ceiling can be raised or lowered, at will, as high or low as you want to make them. With Windows, that floor and ceiling are immovable. You can’t go further than Microsoft has deemed it necessary to go.
Take, for instance, the desktop. Unless you are willing to pay for and install a third-party application that can alter the desktop appearance, with Windows you are stuck with what Microsoft has declared is the ideal desktop for you. With Linux, you can pretty much make your desktop look and feel exactly how you want/need. You can have as much or as little on your desktop as you want. From simple flat Fluxbox to a full-blown 3D Compiz experience, the Linux desktop is as flexible an environment as there is on a computer.
#8: Fanboys vs. corporate types
I wanted to add this because even though Linux has reached well beyond its school-project roots, Linux users tend to be soapbox-dwelling fanatics who are quick to spout off about why you should be choosing Linux over Windows. I am guilty of this on a daily basis (I try hard to recruit new fanboys/girls), and it’s a badge I wear proudly. Of course, this is seen as less than professional by some. After all, why would something worthy of a corporate environment have or need cheerleaders? Shouldn’t the software sell itself? Because of the open source nature of Linux, it has to make do without the help of the marketing budgets and deep pockets of Microsoft. With that comes the need for fans to help spread the word. And word of mouth is the best friend of Linux.
Some see the fanaticism as the same college-level hoorah that keeps Linux in the basements for LUG meetings and science projects. But I beg to differ. Another company, thanks to the phenomenon of a simple music player and phone, has fallen into the same fanboy fanaticism, and yet that company’s image has not been besmirched because of that fanaticism. Windows does not have these same fans. Instead, Windows has a league of paper-certified administrators who believe the hype when they hear the misrepresented market share numbers reassuring them they will be employable until the end of time.
#9: Automated vs. nonautomated removable media
I remember the days of old when you had to mount your floppy to use it and unmount it to remove it. Well, those times are drawing to a close — but not completely. One issue that plagues new Linux users is how removable media is used. The idea of having to manually “mount” a CD drive to access the contents of a CD is completely foreign to new users. There is a reason this is the way it is. Because Linux has always been a multiuser platform, it was thought that forcing a user to mount media before using it would keep one user’s files from being overwritten by another user. Think about it: On a multiuser system, if everyone had instant access to a disk that had been inserted, what would stop them from deleting or overwriting a file you had just added to the media? Things have now evolved to the point where Linux subsystems are set up so that you can use removable devices in the same way you use them in Windows. But it’s not the norm. And besides, who doesn’t want to manually edit the /etc/fstab file?
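For the curious, here’s roughly what the manual routine (and the fstab entry behind it) looks like; device and mount point names vary by distribution:

    mount /dev/cdrom /mnt/cdrom   # make the disc's contents visible
    ls /mnt/cdrom                 # browse them
    umount /mnt/cdrom             # required before ejecting

    # The /etc/fstab line that lets ordinary users do the mounting:
    # /dev/cdrom  /mnt/cdrom  iso9660  ro,user,noauto  0  0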
#10: Multilayered run levels vs. a single-layered run level
I couldn’t figure out how best to title this point, so I went with a description. What I’m talking about is Linux’s inherent ability to stop at different run levels. With this, you can work from either the command line (run level 3) or the GUI (run level 5). This can really save your socks when X Windows is fubared and you need to figure out the problem. You can do this by booting into run level 3, logging in as root, and finding/fixing the problem.
With Windows, you’re lucky to get to a command line via safe mode — and then you may or may not have the tools you need to fix the problem. In Linux, even in run level 3, you can still get and install a tool to help you out (hello apt-get install APPLICATION via the command line). Having different run levels is helpful in another way. Say the machine in question is a Web or mail server. You want to give it all the memory you have, so you don’t want the machine to boot into run level 5. However, there are times when you do want the GUI for administrative purposes (even though you can fully administer a Linux server from the command line). Because you can run the startx command from the command line at run level 3, you can still start up X Windows and have your GUI as well. With Windows, you are stuck at the Graphical run level unless you hit a serious problem.
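On a SysV-init distribution, the whole dance amounts to a handful of commands (the inittab change is shown as a comment because it should be made deliberately):

    runlevel     # show the previous and current run level
    telinit 3    # drop to multiuser mode with no GUI
    startx       # bring up X on demand from run level 3

    # To boot into run level 3 by default, set this line in /etc/inittab:
    #   id:3:initdefault: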
Your call…
Those are 10 fundamental differences between Linux and Windows. You can decide for yourself whether you think those differences give the advantage to one operating system or the other. Me? Well I think my reputation (and opinion) precedes me, so I probably don’t need to say I feel strongly that the advantage leans toward Linux.
10 tips for implementing green IT
“Going green” is the hot new trend in the business world, and that naturally filters down to the IT department. Implemented correctly, eco-friendly tactics can make your operations more efficient and save you money.
The goals of green IT include minimizing the use of hazardous materials, maximizing energy efficiency, and encouraging recycling and/or use of biodegradable products — without negatively affecting productivity. In this article, we’ll look at 10 ways to implement green IT practices in your organization.
#1: Buy energy efficient hardware
New offerings from major hardware vendors include notebooks, workstations, and servers that meet the EPA’s Energy Star guidelines for lower power consumption. Look for systems that have good EPEAT ratings (www.epeat.net). The ratings use standards set by the IEEE to measure “environmental performance.” All EPEAT-registered products must meet Energy Star 4.0 criteria.
Multicore processors increase processing output without substantially increasing energy usage. Also look for high efficiency (80%) power supplies, variable speed temperature controlled fans, small form factor hard drives, and low voltage processors.
#2: Use power management technology and best practices
Modern operating systems running on Advanced Configuration and Power Interface (ACPI)-enabled systems incorporate power-saving features that allow you to configure monitors and hard disks to power down after a specified period of inactivity. Systems can be set to hibernate when not in use, thus powering down the CPU and RAM as well.
Hardware vendors have their own power management software, which they load on their systems or offer as options. For example, HP’s Power Manager provides real-time reporting that shows how the settings you have configured affect the energy used by the computer.
There are also many third-party power management products that can provide further flexibility and control over computers’ energy consumption. Some programs make it possible to manually reduce the voltage supplied to the CPU. Others can handle it automatically on systems with Intel SpeedStep or AMD Cool’n’Quiet technologies.
Other technologies, such as Intel’s vPro, allow you to turn computers on and off remotely, saving energy because you don’t have to leave systems running all night just to, for example, catch a patch deployment scheduled for 2:00 A.M.
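On the Linux side, the same ACPI-era knobs are easily scripted; Windows exposes equivalents through its power options. A minimal sketch (the timeouts are arbitrary examples):

    # Spin down a hard disk after 10 minutes of inactivity
    # (-S 120 means 120 five-second units)
    hdparm -S 120 /dev/sda

    # Power the monitor down via DPMS after 10/15/20 minutes
    xset dpms 600 900 1200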
#3: Use virtualization technology to consolidate servers
You can reduce the number of physical servers, and thus the energy consumption, by using virtualization technology to run multiple virtual machines on a single physical server. Because many servers are severely underutilized (in many cases, in use only 10 to 15 percent of the time they’re running), the savings can be dramatic. VMware claims that its virtualized infrastructure can decrease energy costs by as much as 80 percent.
The same type of benefits can be realized with Microsoft’s Hyper-V virtualization technology, which is an integrated operating system feature of Windows Server 2008.
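Before you consolidate, it’s worth proving that a server really is underutilized. On a Linux box, a quick sampling session might look like this (sar comes from the sysstat package):

    uptime          # load averages at a glance
    vmstat 5 12     # CPU, memory, and I/O samples over a minute
    sar -u 1 60     # one-second CPU utilization samples for a minute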
#4: Consolidate storage with SAN/NAS solutions
Just as server consolidation saves energy, so does consolidation of storage using storage area networks and network attached storage solutions. The Storage Networking Industry Association (SNIA) proposes such practices as powering down selected drives, using slower drives where possible, and not overbuilding power/cooling equipment based on peak power requirements shown in label ratings.
#5: Optimize data center design
Data centers are huge consumers of energy, and cooling all the equipment is a big issue. Data center design that incorporates hot aisle and cold aisle layout, coupled cooling (placing cooling systems closer to heat sources), and liquid cooling can tremendously reduce the energy needed to run the data center.
Another way to “green” the data center is to use low-powered blade servers and more energy-efficient uninterruptible power supplies, which can use 70 percent less power than a legacy UPS.
Optimum data center design for saving energy should also take into account the big picture, by considering the use of alternative energy technologies (photovoltaics, evaporative cooling, etc.) and catalytic converters on backup generators, and from the ground up, by minimizing the footprints of the buildings themselves. Energy-monitoring systems provide the information you need to measure efficiency. A Microsoft TechNet article discusses various ways to build a green data center.
#6: Use thin clients to reduce power usage
Another way to reduce the amount of energy consumed by computers is to deploy thin clients. Because most of the processing is done on the server, the thin clients use very little energy. In fact, a typical thin client uses less power while up and running applications than an Energy Star compliant PC uses in sleep mode. Thin clients are also ecologically friendly because they generate less e-waste. There’s no hard drive, less memory, and fewer components to be dealt with at the end of their lifecycles.
Last year, a Verizon spokesman said the company had decreased energy consumption by 30 percent by replacing PCs with thin clients, saving about $1 million per year.
#7: Use more efficient displays
If you have old CRT monitors still in use, replacing them with LCD displays can save up to 70 percent in energy costs. However, not all LCD monitors are created equal when it comes to power consumption. High efficiency LCDs are available from several vendors.
LG recently released what it claims is the world’s most energy efficient LCD monitor, the Flatron W2252TE. Tests have shown that it uses less than half the power of conventional 22-inch monitors.
#8: Recycle systems and supplies
To reduce the load on already overtaxed landfills and to avoid sending hazardous materials to those landfills (where they can leach into the environment and cause harm), old systems and supplies can be reused, repurposed, and/or recycled. You can start by repurposing items within the company; for example, in many cases, when a graphics designer or engineer needs a new high end workstation to run resource-hungry programs, the old computer is perfectly adequate for use by someone doing word processing, spreadsheets, or other less intensive tasks. This hand-me-down method allows two workers to get better systems than they had, while requiring the purchase of only one new machine (thus saving money and avoiding unnecessary e-waste).
Old electronic devices can also be reused by those outside the company. You can donate old computers and other devices still in working order to schools and nonprofit organizations, which can still get a lot of use out of them. Finally, much electronic waste can be recycled, the parts used to make new items. Things like old printer cartridges, old cell phones, and paper can all be recycled. Some computer vendors, such as Dell, have programs to take back computers and peripherals for recycling.
#9: Reduce paper consumption
Another way to save money while reducing your company’s impact on the environment is to reduce your consumption of paper. You can do this by switching from a paper-based to an electronic workflow: creating, editing, viewing, and delivering documents in digital rather than printed form. Send documents as e-mail attachments rather than faxing.
And when printing is unavoidable, you can still reduce waste and save money by setting your printers to use duplex (double-sided) printing. An internal study conducted by HP showed that a Fortune 500 company can save 800 tons of paper per year (a savings of over $7 million) by printing on both sides.
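If your print queues are served through CUPS, duplex can be made the default per queue instead of being left to each user; the queue name here is a placeholder:

    # Make double-sided output the default for a queue
    lpadmin -p office-laser -o sides=two-sided-long-edge

    # Or request it for a single job
    lpr -P office-laser -o sides=two-sided-long-edge report.pdf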
#10: Encourage telecommuting
The ultimate way to have a greener office is to have less office. By encouraging as many workers as possible to telecommute, you can reduce the amount of office space that needs to be heated and cooled, the number of computers required on site, and the number of miles driven by employees to get to and from work. Telecommuting reduces costs for both employers and employees and can also reduce the spread of contagious diseases.
10 surprising things about Windows Server 2008
Windows Server 2003 felt like a refresh of Windows Server 2000. There were few radical changes, and most of the improvements were fairly under the surface. Windows Server 2008, on the other hand, is a full-size helping of “new and improved.” While the overall package is quite good, there are a few surprises, “gotchas,” and hidden delights you will want to know about before deciding if you will be moving to Windows Server 2008 any time soon.
#1: The 64-bit revolution is not complete
There have been 64-bit editions of Windows Server for years now, and Microsoft has made it quite clear that it wants all of its customers to move to 64-bit operating systems. That does not mean that you can throw away your 32-bit Windows Server 2008 CD, though! Over the last few months, I have been shocked on more than one occasion by the pieces of Microsoft software that not only do not have 64-bit versions, but will not run under a 64-bit OS at all. This list includes Team Foundation Server and ISA Server. If you are planning on moving to 64-bit Windows Server 2008, be prepared to have a 32-bit server or two around, whether it be on physical hardware or in a VM.
#2: Who moved my cheese?
While the UI changes in Windows Server 2008 are not nearly as sweeping as the Aero interface in Vista, the system has undergone a dramatic rearrangement and renaming of its various applets. In retrospect, the organization of these items is much more sensible, but that hardly matters when you have years of experience going to a particular area to find something, only to have it suddenly change. Expect to be a bit frustrated in the Control Panel until you get used to it.
#3: Windows Workstation 2008 might catch on
In an odd turn of events, Microsoft has provided the ability to bring the “Vista Desktop Experience” into Windows Server 2008. I doubt that many server administrators were asking for this, but the unusual result is that a number of people are modifying Windows Server 2008 to be as close to a desktop OS as possible. There have always been a few people who use the server edition of Windows as a desktop, but this makes it much easier and friendlier. These home-brewed efforts are generally called “Windows Workstation 2008,” in case you’re interested in trying it out on your own.
#4: Hyper-V is good, but…
Hyper-V was one of the most anticipated features of Windows Server 2008, and it’s surprisingly good, particularly for a version 1 release from Microsoft. It is stable, easy to install and configure, and does not seem to have any major problems. For those of us who have been beaten into the “wait until the third version” or “don’t install until SP1” mentality, this is a refreshing surprise.
#5: …Hyper-V is limited
Hyper-V, while of high quality, is sorely lacking in features. Considering that it was billed as a real alternative to VMware and other existing solutions, it is a disappointment (to say the least) that it does not seem to include any utilities for importing VMs from products other than Virtual PC and Virtual Server. Even those imports are not workaround-free. Another real surprise here is the lack of a physical-to-virtual conversion utility. Hyper-V may be a good system, but make sure that you fully try it out before you commit to using it.
#6: NT 4 domain migration — it’s not happening
If you have been putting off the painful migration from your NT 4 domain until Windows Server 2008 was released, don’t keep waiting. The older version (3.0) of the Active Directory Migration Tool (ADMT) supports migrations from NT 4, but not to Windows Server 2008. The latest version (3.1) supports migrations to Windows Server 2008, but not from NT 4. In other words, it’s a two-step trip: migrate your NT 4 domain to an intermediate version, such as Windows Server 2003, and upgrade to Windows Server 2008 from there.
#7: The ashtrays are now optional
In prior versions of Windows Server, a lot of applications came installed by default. No one ever uninstalled them because they did not cause any harm, even if you didn’t use them or installed an alternative. Now, even the “throwaway” applications, like Windows Backup, are not installed by default. After installation, you need to add “features” to get the full Windows Server suite of applications. This can be frustrating if you are in a hurry, but the reduced clutter and resource overhead are worth it.
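By the way, if you are in that hurry, you don’t have to click through Server Manager; Windows Server 2008 also exposes roles and features at the command line. A minimal sketch, with the caveat that you should verify the exact feature ID against the -query output on your own server:

    servermanagercmd -query
    servermanagercmd -install Backup-Features

The first command lists every available role and feature and marks the ones already installed; the second adds the Windows Server Backup feature. Both require an elevated command prompt.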
#8: Licensing is bewildering
Continuing a hallowed Microsoft tradition, trying to understand the licensing terms of Windows Server 2008 feels like hammering nails with your forehead. So maybe this isn’t so much a surprise as a gotcha. The Standard Edition makes sense, but when you get into the issues around virtualization in the Enterprise and Datacenter Editions, things can be a bit confusing. Depending upon your need for virtual machines and the number of physical CPUs (not CPU cores, thankfully) in your server, Enterprise Edition may be cheaper — or it may be more expensive than Datacenter Edition. One thing to keep in mind: Once you start using virtual machines, you tend to like them a lot more than you thought you would, and it’s easy to find yourself running far more of them than you originally planned.
#9: There’s no bloat
Maybe it’s because Vista set expectations of pain, or because hardware has gotten so much cheaper, but Windows Server 2008 does not feel bloated or slow at all. Microsoft has done a pretty good job of trimming the installed feature set to the bare minimum, and Server Core can take that even further. Depending upon your needs, it may well be possible to upgrade even older equipment to Windows Server 2008 without beefing up the hardware.
#10: Quality beats expectations
Microsoft customers have developed low expectations of quality over the years, unfortunately with good reason. While its track record for initial releases, in terms of security holes and bug counts, seems to be improving, customers are still howling about Vista. Against that backdrop, it has come as a real surprise that the overall reaction to Windows Server 2008 has been muted, to say the least. The horror stories just are not flying around like they were with Vista. Maybe it’s the extra year Microsoft spent working on it, or the different expectations of people who work with servers, but Windows Server 2008 has had a pretty warm reception so far. That speaks volumes about its quality. There is nothing particularly flashy or standout about it. But at the same time, it is a solid, high-quality product. And that is exactly what system administrators need.
10 ways to get maximum value from a professional development class
From time to time you will find yourself taking a professional development class. It could cover communications, conflict management, business writing, or some other area. It might be a class that’s internal to your company, or it might be a class you attend outside, with people from other companies. In any case, your company (or you personally) made a substantial investment in this training. Here are pointers for management — and for you — to ensure both of you gain maximum value from the class.
#1: Management should attend
I wish I had a dollar for every time a non-management attendee said to me during a session I was teaching, “Calvin, your material is great, but you need to be saying this to our bosses.” On the other hand, lest I become too vain, maybe there are others who said to themselves, “This was a waste of time, so our managers should suffer as well.”
In either case, management increases its credibility among staff by attending the same training. Unless it does so, the chances are great that management will undercut the philosophy the class is attempting to impart.
By the way, if you hold to the “waste of time” view, please see point 5 below.
#2: Separate managers from subordinates
It’s generally inadvisable to have managers sit through the entire class with their direct subordinates. The presence of the former could inhibit the latter from speaking up, particularly when organizational issues and policies are being discussed.
Two alternatives address this concern. First, management can attend its own separate session. Second, management can attend the same session as direct subordinates but be excused 30 to 45 minutes before the end. At that point, staff attendees who have issues can raise them. In other words, that’s the time attendees can start saying, “Calvin, you’re right in what you’re saying, but that won’t work here because…”
#3: Management must respect class time
If management is sending staff to training, it has to respect that time. The “tap on the shoulder” to handle an issue that takes “just a second” never takes just a second, of course. It ends up taking that attendee out of class completely, and when that happens, it defeats the purpose of having the person attend at all.
#4: Distribute attendance among many departments
Given the choice of having many attendees from one (or only a few) departments vs. having a few attendees from many departments, I choose the latter. From a practical standpoint, this strategy reduces the burden on those who aren’t attending class but still must support business operations. From an organizational standpoint, it can help build morale by giving attendees exposure to other departments and their workers.
#5: Recognize the value of the training
From time to time, when I talk about skills in communicating with customers, I see people with rolling eyes and folded arms. No doubt they’re saying to themselves, “Why am I wasting my time here? I could be writing a program / configuring a router / completing a problem ticket.”
That’s why I often open with a quiz: What do Operating System/2, Betamax, and the Dvorak keyboard all have in common? Answer: They were technically superior to their competition but nonetheless lost out. In the same way, technical people who rely only on their technical skills for career success could be in for a shock, because skill in working with others is at least as important, if not more so.
Try to keep an open mind. Will some training turn out to be a “bomb”? I hope not, but even in that case, you can still benefit. Sit down and analyze why you thought the session failed. Then, before your next session, resolve to discuss those concerns with the instructor if you can.
#6: Make sure your job is covered during your absence
You can do your part to avoid getting the aforementioned tap on the shoulder from the boss. Make sure your co-workers and customers are aware of your absence. Adjust your voicemail greeting and set up an e-mail or instant message autoresponder, if you can. Make sure everyone knows of any open items or issues and how they should be handled.
#7: Have specific personal objectives
Your time in class will be far more meaningful if you set personal objectives beforehand. Read up on any class descriptions, syllabi, or topic lists. Then mentally go over the areas where you believe you most need improvement. When you set your objectives, make sure they are measurable — and, more important, realistic.
#8: Speak up
The biggest shock to many would-be law students is the total irrelevance of class participation to one’s final grade. Nonetheless, I still remember Professor Woodward’s advice in contracts class. He said we should speak in class anyway, because doing so forces us to master the material. In other words, we may think we know the material, but having to articulate it is the acid test.
You probably won’t get a grade for your professional development class. However, you probably will pick up the concepts more quickly, and retain them better, if you speak up.
#9: Apply exercises and activities to your job
Those exercises where you walk the maze, build the toothpick tower, or sequence the 15 items to help you survive the desert aren’t there just for the heck of it. They’re there because they deal with some skill that’s important to your job. The instructor or facilitator, in discussing the exercise afterward, should be making that association. If not, make it yourself. Write a note to yourself about the lessons you learned from the exercise. In particular, ask yourself how these lessons apply to your job and how you might act differently having gained the insights you did.
#10: Write a letter to yourself
At the end of sessions I lead, I ask attendees to write a letter to themselves about what they learned. I then take those letters and simply hold them for about three months, after which I return them to their respective authors. I do so because many attendees remember clearly the material immediately after class. However, in the weeks that follow, their memories may dim. Seeing the letter refreshes their memory and reinforces the class session.
If the leader of your session doesn’t follow this practice, consider doing it on your own. Write a letter, seal it, and just put it somewhere that it won’t get lost. Maybe write a note on the outside, such as, “Open on [date three months from now].”
10 reasons why you should use the Opera browser
I have gone through many browsers in my IT lifetime, from Lynx to Mosaic to Mozilla to Netscape to Firefox to Internet Explorer to Safari to Flock. But there’s another browser that pokes its head in and out of that cycle — Opera. Opera gets little press in the battle for Internet supremacy, but it’s making huge waves in other arenas (can you say “mobile”?) and is always a steady player in the browser market.
But why would you want to use a browser that gets little love in the market? I will give you 10 good reasons.
#1: Speed
It seems that no matter how many leaps and bounds Firefox and Internet Explorer make, Opera is always able to render pages faster. In both cold and warm starts, Opera beats both Firefox and Internet Explorer. We’re not talking about a difference the naked eye is incapable of seeing, either; the speed difference is actually noticeable. So if you are a speed junkie, and most of you are, you should be using Opera for this reason alone.
#2: Speed Dial
Speed Dial is one of those features that generally steals the show. It’s basically a set of visual bookmarks on one page. To add a page to Speed Dial, you simply click an empty slot on the Speed Dial page and enter the information. When you have a full page of Speed Dial bookmarks, you can quickly go to the page you want by clicking the related image. For even faster browsing, you can press Ctrl + N, where N is the number (1 through 9) assigned to your page in Speed Dial.
#3: Widgets
Opera Widgets are like Firefox extensions on steroids. Widgets are what the evolution of the Web is all about — little Web-based applications you can run from inside (or, in some cases, outside) your browser. Some of the widgets are useful (such as the Touch The Sky international weather applet) and some are just fun (such as the Sim Aquarium). They are just as easy to install as Firefox extensions.
#4: Wand
Save form information and/or passwords with this handy tool. Every time you fill out a form or a password, the Wand will ask whether you want to save the information. When you save information (say, a form), a yellow border will appear around the form. The next time you need to fill out that form, click the Wand button or press Ctrl + Enter, and the information will be filled in for you automatically.
#5: Notes
Have you ever been browsing and wanted to take notes on a page or site (or about something totally unrelated to your Web browsing)? Opera comes complete with a small Notes application that lets you jot down whatever you need to jot down. To access Notes, click the Tools menu and then click Notes. The tool itself is incredibly simple to use and equally handy.
#6: BitTorrent
Yes, it’s true: Opera has built-in support for the BitTorrent protocol. And the built-in BitTorrent client is simple to use: Click a Torrent link, and a dialog will open asking where you want to download the file. The Torrent client is enabled by default, so if your company doesn’t allow Torrenting, you should probably disable this feature. Note: When downloading Torrents, you will continue to share content until you either stop the download or close the browser.
#7: Display modes
Another unique-to-Opera feature is its display modes, which allow you to quickly switch between Fit To Width and Full Screen mode. Fit To Width mode adjusts the page size to the available screen space while using flexible reformatting. Full Screen mode gives over the entire screen to browsing; you drop all menus and toolbars, leaving only context menus, mouse gestures, and keyboard shortcuts. The latter mode is especially good for smaller screens.
#8: Quick Preferences
The Quick Preferences menu is one of those features the power user will really appreciate. I often use it to enable or disable various features, and not having to open the Preferences window makes for a much quicker experience. From this menu, you can alter preferences for pop-ups, images, Java/JavaScript, plug-ins, cookies, and proxies. This is perfect if you’re one of those users who block cookies all the time, until a site comes along where you want to enable them.
#9: Mouse Gestures
This feature tends to bother most keyboard junkies (those who can’t stand to move their fingers from the keyboard). But Mouse Gestures is a built-in feature that maps specific mouse movements to common actions. For example, you can go back a page by holding down the right mouse button and clicking the left mouse button. This is pretty handy on a laptop, where using the track pad can take more time than you probably want to spend on navigation. Even for those who prefer to keep their hands on the keys, the feature can still save time: Instead of reaching for the mouse, moving it to the toolbar, and clicking a button, you simply grab the mouse and make the gesture. Of course, this does require memorizing the gestures.
#10: Session saving
I love this feature. All too many times, I have needed to close a browser window but didn’t want to lose a page. To keep from losing the page, I used to keep a temporary bookmark file where I could house those bookmarks. With Opera, that’s history. If you have a page (or a number of pages) you want to save, just go to the File menu, open the Sessions submenu, and click Save This Session. The next time you open Opera, the same tabs will open. You can also manage your saved sessions, so you can save multiple sessions and delete selected ones.
The upshot
With just the above list, you can see how easily Opera separates itself from the rest of the crowd. It’s a different beast in the Web browsing space. It’s fast, stable, and cross-platform, and it contains many features other browsers can’t touch.
10 things Linux does better than Windows
Throughout my 10+ years of using Linux, I have heard about everything that Windows does better than Linux. So I thought it time to shoot back and remind everyone of what Linux does better than Windows. Of course, being the zealot that I am, I could list far more than 10 items. But I will stick with the theme and list only what I deem to be the 10 areas where Linux not only does better than Windows but blows it out of the water.
#1: TCO
This can o’ worms has been, and will be, debated until both operating systems are no more. But let’s face it — the cost of per-seat Windows licensing for a large company far outweighs the cost of betting on IT learning Linux. This is so for a couple of reasons.
First, most IT pros already know a thing or two about Linux. Second, today’s Linux is not your mother’s Linux. It has come a long, long way from where it was when I first started. Ten years ago, I would have said, hands down, that Windows wins the TCO battle. But that was before KDE and GNOME brought their desktops to the point where any given group of monkeys could type Hamlet on a Linux box as quickly as they could on a Windows box. I’d bet any IT department could roll out Linux in such a way that end users would hardly know the difference. With KDE 4.1 leaps and bounds beyond 4.0, it’s already apparent where the Linux desktop is going — straight into end users’ hands. So with all the FUD and rhetoric aside, Windows can’t compete with Linux on TCO. Add the cost of software (including antivirus and spyware protection) for Windows vs. Linux, and your IT budget just fell deeply into the red.
#2: Desktop
You can’t keep a straight face and say the Linux desktop is more difficult to use than the Windows desktop. If you can, you might want to check the release number of the Linux distribution you are using. Both GNOME and KDE have outpaced Windows for user-friendliness. Even KDE 4, which has altered the path of KDE quite a bit, will make any given user feel at home with the interface. But the Linux desktop beats the Windows desktop for more reasons than just user-friendliness. It’s far more flexible than anything Microsoft has ever released. If you don’t like the way the Linux desktop looks or behaves, change it. If you don’t like the desktop included with your distribution, add another. And what if, on rare occasion, the desktop locks up? Windows might require a hard restart. Linux? Hit Ctrl + Alt + Backspace to force a restart of the X server. Or you can always drop into a virtual console and kill the application that froze your desktop. It’s all about flexibility… something the Windows desktop does not enjoy.
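That last trick deserves a sketch. Assume Firefox is the application that froze (substitute whatever actually hung): press Ctrl + Alt + F1 to reach a virtual console, log in, and run:

    ps aux | grep firefox
    kill -9 24601

The first command finds the offending process and its PID (24601 is a placeholder; use the number ps actually reports), and the second forcibly terminates it. Ctrl + Alt + F7 typically returns you to the graphical session, minus the frozen application.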
#3: Server
For anyone who thinks Windows has the server market cornered, I would ask you to wake up and join the 21st century. Linux can, and does, serve up anything and everything, and it does so easily and well. It’s fast, secure, easy to configure, and very scalable. And say you don’t happen to be fond of Sendmail. In that case, you have plenty of alternatives to choose from, such as Postfix or Exim. The same goes for serving up Web pages: There are plenty of alternatives to Apache, some of which are incredibly lightweight.
#4: Security
Recently, there was a scare in the IT world known as Phalanx 2, and it actually hit Linux. But the real issue was that it hit Linux servers that hadn’t been updated; poor administration is what allowed this little gem to get noticed. The patch, as usual in the Linux world, came nearly as soon as word got out. And that’s the rub. Security issues plague Windows for a couple of reasons: The operating system ships with plenty of security holes, and Microsoft is slow to release patches for them. Of course, this is not to say that Linux is immune. It isn’t. But it is less susceptible to attacks and faster to fix problems.
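And staying updated, the failure that let Phalanx 2 in, is a one-liner on most distributions. A sketch for a Debian-based server (Red Hat-style systems have the equivalent in yum):

    apt-get update && apt-get upgrade

Run as root, the first half refreshes the package lists and the second applies every pending update, security fixes included. Make it routine, and that class of compromise becomes much harder to pull off.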
#5: Flexibility
This stems from the desktop, but because Linux is such an amazingly adaptable operating system, it’s wrong to confine flexibility to the desktop alone. Here’s the thing: With Linux, there is always more than one way to handle a task. Add to that the ability to get really creative with your problem solving, and you have the makings of a far superior system. Windows is about as inflexible as an operating system can be. Think about it this way: Out of the box, what can you do with Windows? You can surf the Web and get e-mail. Out of the box, what can you do with Linux? The better question is what can you NOT do with Linux? Linux is to Legos as Windows is to Lincoln Logs. With Lincoln Logs, you have the pieces to make fine log cabins. With Legos, you have the pieces to make, well, anything. And then you have all the fanboys making Star Wars Legos and Legos video games. Just where did all those Lincoln Logs fanboys go?
#6: Package management
Really, all I should have to say here is that Windows does no package management. Sure, you can always install an application with a single click. But what if you don’t know which package you’re looking for? Where is the repository to search? Where are the various means of installing applications? Where are the dependency checks? Where are the md5 checks? And what about the way Windows lets applications install without anything like root access? Safety? Security? Sanity?
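To make those questions concrete, here is what the answers look like on a Debian-based distribution (yum and other package managers offer the same workflow elsewhere); the keyword and package name are just examples:

    apt-cache search "image editor"   # search the repository by keyword
    apt-cache depends gimp            # list what the package requires
    apt-get install gimp              # fetch, verify, resolve dependencies, install

One tool, a trusted repository, automatic dependency and checksum verification, and root access required at exactly one step. That is what package management means, and Windows has no native answer to it.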
#7: Community
About the only community for Windows is the flock of MCSEs, the denizens of the Microsoft campus, and the countless third-party software companies preying on those who can’t figure out what to do when Windows goes down for the count. Linux has always been, and always will be, about community. It was built by a community, for a community. And the Linux community is there to help those in need. From mailing lists to LUGs (Linux user groups) to forums to developers to Linus Torvalds himself (the creator of Linux), Linux is backed by a strong community of users of all types, ages, nationalities, and social anxieties.
#8: Interoperability
Windows plays REALLY well with Windows. Linux plays well with everyone. I’ve never met a system I couldn’t connect Linux to, and that includes OS X, Windows, various Linux distributions, OS/2, PlayStations… the list goes on and on. Without the help of third-party software, Windows isn’t nearly as interoperable. And we haven’t even touched on formats. With OpenOffice, you can open and save in nearly any format (regardless of release date). Have you come across that docx format yet? Had fun getting it to open in anything but MS Word 2007 or later?
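One small illustration: Attaching a Windows file share to a Linux box takes a single command. This is a sketch, assuming the CIFS client tools are installed and using placeholder server, share, and account names:

    mount -t cifs //winserver/share /mnt/share -o username=jdoe

Run as root (with /mnt/share created first), that makes the Windows share behave like any other local directory. And going the other direction, Samba lets the same Linux box serve files to Windows clients.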
#9: Command line
This is another item where I shouldn’t have to say much more than the title. The Linux command line can do nearly anything you need to do in the Linux operating system. Yes, you need a bit of knowledge to use it, but the same holds true for the Windows command line. The biggest difference is how much you can accomplish when the command line is all you have. If you had to administer two machines through the command line only (one Linux box and one Windows box), you would quickly understand just how superior the Linux CLI is to the vastly underpowered Windows CLI.
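For a taste of what command-line-only administration means in practice, suppose a server’s disk is filling up. One pipeline built from standard tools finds the worst offenders (the directory is just an example):

    du -sk /var/log/* | sort -n | tail -5

That sizes everything under /var/log, sorts the results numerically, and prints the five largest entries. Every piece of it can be redirected, scripted, or scheduled with cron. The stock Windows CLI of this era simply has no comparable set of composable building blocks.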
#10: Evolution
For most users, Vista was a step backward, and that step backward took a long time (five years) to come to fruition. With most Linux distributions, new releases arrive every six months, and some of them are major jumps in technological advancement. Linux also listens to its community. What are users saying? What do they need? From the kernel to the desktop, the Linux developer community is in sync with its users. Microsoft? Not so much. Microsoft takes its time to release what may or may not be an improvement, and, generally speaking, its release dates are about as far from set in stone as dates can be. It should go without saying that Microsoft is not an agile developer. In fact, I would say Microsoft, in its arrogance, insists that companies, users, and third-party developers evolve around it.
That’s my short list of big-ticket items that Linux does better than Windows. There will be naysayers who feel differently, but I think most people will agree with these points. Of course, I am not so closed-minded as to think there is nothing Windows does better than Linux. I can think of a few off the top of my head: PR, marketing, FUD, games, crashing, and USB scanners.
10 ways to learn new skills on the cheap
The one thing we know for sure about IT is that the technology is constantly changing. Staying current with that technology, and acquiring the skills to support it, is a career necessity. Whether you simply need to learn the latest techniques or you want to completely retool, if your employer or client does not fund the training, it could be very expensive for you. Fortunately, there are some low/no-cost alternatives to conventional training programs that might even be more effective and be a better fit for your learning style.
#1: Public library
As obvious as this resource is, I am always surprised at how many people never think of it. Though some of the material may not be the latest, you might be surprised, especially if you have access to a fairly large metropolitan library. Do not forget about videos and DVDs either, especially for training on less technical, common applications, such as QuickBooks or Microsoft Access. If you are looking for business or methodology training, you may also want to look for audio books. You may not be able to find detailed information on the Rational Unified Process (RUP), but Six Sigma and other initiatives in which your company or client may be involved may well be there. Audio books also enable you to convert idle drive time, or exercise time, into a value-add for you and your client.
If you are stuck in a small town with limited resources, consider approaching a larger library system to become a guest patron. Many times, this is available to the public for a fee, but your local library may also have a reciprocal agreement, in which case access to the other library system may be free. Also, if you teach at a school of any type, you may be granted access to a library system if you can show proof of your status as a teacher.
University libraries are another rich store of material from which you can learn new skills. But unless you are a student at the school, it may be less than straightforward to check out materials. If the university is state-funded, you might be permitted to check out material if you are a resident of the state. If the university you approach does not permit you to check out material, you can always make a routine of camping out there for a couple of hours each week and learning on the premises.
#2: Company library/resources
Many companies have their own libraries and training that are available for the asking. Training is usually part of human resources, so you might start there if the company doesn’t have a formal training department. If you are an independent consultant, does your client have a library you could tap into? It has been my experience that clients are generally quite willing to open up their training to outside consultants, especially if the training makes the consultants more effective in working with them.
If there is a cost associated with the training, however, reimbursement can be complicated, as clients usually lack a process for accepting that type of payment. Very large companies have a particularly difficult time accepting money for training, but do not give up. Your client’s department may still be willing to cover your training if they see a material benefit.
#3: Vendor training
It is to a vendor’s advantage to have you use their product, and use it effectively. To that end, many vendors offer training for little or no cost. This training is made available in a variety of formats, including:
Training sessions at conferences and trade fairs
White papers
Online tutorials
Online/on-demand videos
Special training events
You will not find a five-day intensive training session available for free, but you can still learn quite a bit from these free vendor resources. The better prepared you are going into a vendor’s event (armed with questions, for instance), the more you will gain from the experience.
#4: Podcasts
Podcasts are becoming increasingly popular through the typical channels of technical media and vendors. They include product information or interviews with experts in a particular field and tend to cover fairly narrow topics, such as the software quality topics offered by StickyMinds. There are also a number of resources from more public sources, such as iTunes and YouTube. These may come from academic sources, or they may be the product of someone who simply has a passion for the subject.
#5: Webinars/webcasts and virtual trade shows
One of the greatest developments for people who actually have to work for a living, webinars and virtual trade shows offer a no-travel way to accomplish in an hour what used to take an entire day. Virtual trade shows are not as well attended by vendors as live trade shows, but as vendors figure out how to use the new venue, I expect more will join in. Advantages, besides the obvious lack of travel and the enormous time savings, include a fairly narrow topic focus and relatively easy access to representatives. There are also some pretty awesome networking opportunities.
Webinars usually consist of an industry expert providing general information, followed by product information from the sponsoring vendor. The product typically has some tie to the overall topic, and many times, the product information portion of the webinar may be as informative as the general topic portion. If the sponsor has a broader interest in the industry, such as an association or a publisher, the entire webinar may be information-oriented, with no product application.
Various webcasts can be found at TechRepublic, as well as at other publishers.
#6: Associations and user groups
National organizations typically have a number of resources that you, as a member, can use. These may include online libraries, peer forums, and training courses. There may be a cost associated with some of this training, and access to some resources may require a paid, or premium (read: more expensive), membership. But when you consider that a membership in the Association for Computing Machinery, for example, can give you access to more than 1,100 books online, in addition to its journals and proceedings, it might well be worth the annual fee.
User groups, or other local groups that share your interest in a particular topic, offer a great forum to learn and share information for little or no cost. Special interest groups (SIGs) within the user group offer further topic specialization and can be a tremendous way to learn or be mentored. Check with vendors that interest you, as they may maintain a list of user groups in your area that relate to their products. Microsoft, for example, has a site with user group information, as do other major manufacturers. Consider, also, simple word of mouth and the “community calendar” section of your local paper to find out about upcoming meetings of groups that may interest you.
#7: Volunteering
The best way to learn is by doing. However, most companies are not willing to pay you while you learn. If you have all the books and tutorials but just need to get your hands dirty, why not volunteer to do a project for someone for free? Churches and nonprofits might need work done that you can help with. A new Web site, a donor tracking system, or automation of monthly billing are all things that might benefit them and can give you the hands-on experience you need to approach a prospective employer or client. This is an especially good approach if you are trying to retool yourself with a new technology, or at least a technology that is new to you.
This same approach can be applied in an incremental fashion with existing work you may be doing. Can you work a little beyond your current job description? If you are working within an old development methodology, for example, but want to try what you have learned about RUP, redo a portion of your work in the style of the new methodology, such as use cases. There is nothing like trying a skill on a real project to give you a real sense of the process, and sometimes a real sense of how much you still need to learn. Who knows — besides getting some great experience, you might even start to convert your team to the new process (but don’t get your hopes up).
#8: The InternetWho has not Googled to learn more on a topic or to clear up an office dispute on the origins of some phrase or song lyric? This same resource is a great learning tool. A simple topic search can produce content from college courses, vendor training, and government information sites. Don’t be surprised if some of this content offers better explanations than some text books.
Online publishers are another great source for information to enhance your skills. Consider dropping a topic that interests you into the search field at a site such as DevX, and you may be surprised how much detail you will find.
#9: Continuing educationContinuing education programs, also called adult education or community outreach, offer nondegree classes that are generally conducted in the evening for a modest fee. Besides the stereotypic class on how to weave a basket, many programs also offer database, networking, and a number of other technology classes. Many of these programs are run through high schools and colleges, so if you are not aware of any programs in your area, start by checking with your local high school, career center, or university for contact information.
#10: Community collegeState-run community colleges generally offer a number of affordable classes you can take without seeking a degree. Many of these colleges offer technology and programming classes. Because you have probably not taken the prerequisites for the class, you may need the permission of the instructor, but that should not be a problem if you are already a professional in the field. These programs are usually far less expensive than your typical week-long vendor training and are usually scheduled during the evening to minimize the impact on your workday. There may also be for-profit community colleges in your area. But since they may lack public subsidies, be prepared to pay substantially more for their course offerings.
One less hurdleLimited time, family demands, and travel may still keep you from dedicating to learning a new skill, but if you’re creative, cost doesn’t have to be an obstacle. In fact, the nature of some of these suggested training alternatives lend themselves nicely to working around the time and travel constraints that are so often a barrier. Take advantage of as many of these training approaches as you can, and you will have one less hurdle to moving your career forward.
#1: Public library
As obvious as this resource is, I am always surprised at how many people never think of it. Though some of the material may not be the latest, you might be surprised at what you can find, especially if you have access to a fairly large metropolitan library. Do not forget about videos and DVDs either, especially for training on less technical, common applications, such as QuickBooks or Microsoft Access. If you are looking for business or methodology training, you may also want to look for audio books. You may not be able to find detailed information on the Rational Unified Process (RUP), but Six Sigma and other initiatives in which your company or client may be involved may well be there. Audio books also enable you to convert idle drive time, or exercise time, into a value-add for you and your client.
If you are stuck in a small town with limited resources, consider approaching a larger library system to become a guest patron. Many times this is available to the public for a fee, but your local library may also have a reciprocal agreement with the larger system, in which case access may be free. Also, if you teach at a school of any type, you may be granted access to a library system if you can show proof of your status as a teacher.
University libraries are another rich store of material from which you can learn new skills. But unless you are a student at the school, it may be less than straightforward to check out materials. If the university is state-funded, you might be permitted to check out material if you are a resident of the state. If the university you approach does not permit you to check out material, you can always make a routine of camping out there for a couple of hours each week and learning on the premises.
#2: Company library/resources
Many companies have their own libraries and training that are available for the asking. Training is usually a part of human resources, so you might start there if the company doesn’t have a formal training department. If you are an independent consultant, does your client have a library you could tap into? It has been my experience that clients are generally quite willing to open up their training to outside consultants, especially if the training makes the consultants more effective in working with them.
If there is a cost associated with the training, however, reimbursement can be complicated, as clients usually lack a process for accepting that type of payment. Very large companies have a particularly difficult time accepting money for training, but do not give up. Your client’s department may still be willing to cover your training if they see a material benefit.
#3: Vendor training
It is to a vendor’s advantage to have you use their product, and use it effectively. To that end, many vendors offer training for little or no cost. This training is made available in a variety of formats, including:
Training sessions at conferences and trade fairs
White papers
Online tutorials
Online/on-demand videos
Special training events
You will not find a five-day intensive training session available for free, but you can still learn quite a bit from these free vendor resources. The better prepared you are going into a vendor’s event, including being armed with questions, the more you will gain from the experience.
#4: Podcasts
Podcasts are becoming increasingly popular among the usual channels of technical media and vendors. They include product information or interviews with experts in a particular field and tend to cover fairly narrow topics, such as the software quality topics offered by StickyMinds. There are also a number of podcasts available from more public sources, such as iTunes and YouTube. These may come from academic sources, or they may be the product of someone who simply has a passion for the subject.
#5: Webinars/webcasts and virtual trade shows
One of the greatest developments for people who actually have to work for a living, webinars and virtual trade shows offer a no-travel way to accomplish in an hour what used to take an entire day. Virtual trade shows are not as well attended by vendors as live trade shows, but as vendors figure out how to use the new venue, I expect more will start to join in. Advantages, besides the obvious lack of travel and the enormous time savings, include a fairly narrow topic focus and relatively easy access to representatives. There are some pretty awesome networking opportunities as well.
Webinars usually consist of an industry expert providing general information, followed by product information from the sponsoring vendor. The product typically has some tie to the overall topic, and many times, the product information portion of the webinar may be as informative as the general topic portion. If the sponsor has a broader interest in the industry, such as an association or a publisher, the entire webinar may be information-oriented, with no product application.
Various webcasts can be found at TechRepublic, as well as at other publishers.
#6: Associations and user groups
National organizations typically have a number of resources that you, as a member, can participate in. These may include online libraries, peer forums, and training courses. There may be a cost associated with some of this training, and access to some of the resources may require a paid, or premium (read: more expensive), membership. But when you consider that a membership in the Association for Computing Machinery (ACM), for example, can give you access to more than 1,100 books online, in addition to its journals and proceedings, it might well be worth the annual membership fee.
User groups, or other local groups that share your interest in a particular topic, offer a great forum to learn and share information for little or no cost. Special interest groups (SIGs) within the user group offer further topic specialization and can be a tremendous way to learn or be mentored. Check with vendors that interest you, as they may maintain a list of user groups in your area that relate to their products. Microsoft, for example, has a site with user group information, as do other major manufacturers. Consider, also, simple word of mouth and the “community calendar” section of your local paper to find out about upcoming meetings of groups that may interest you.
#7: Volunteering
The best way to learn is by doing. However, most companies are not willing to pay you while you learn. If you have all of the books and tutorials but just need to get your hands dirty, why not volunteer to do a project for someone for free? Churches and nonprofits might need some work done that you can help with. A new Web site, a donor tracking system, or automation of monthly billing are all things that might benefit them and can give you the hands-on experience you need to approach a prospective employer or client. This is an especially good approach if you are trying to retool yourself with some new technology, or at least a technology that is new to you.
This same approach can be applied in an incremental fashion with existing work you may be doing. Can you work a little beyond your current job description? If you are working within an old development methodology, for example, but want to try what you have learned about RUP, redo a portion of your work in the style of the new methodology, such as use cases. There is nothing like trying a skill on a real project to give you a real sense of the process, and sometimes a real sense of how much you still need to learn. Who knows — besides getting some great experience, you might even start to convert your team to the new process (but don’t get your hopes up).
#8: The Internet
Who has not Googled to learn more about a topic or to clear up an office dispute on the origins of some phrase or song lyric? This same resource is a great learning tool. A simple topic search can produce content from college courses, vendor training, and government information sites. Don’t be surprised if some of this content offers better explanations than some textbooks.
Online publishers are another great source for information to enhance your skills. Consider dropping a topic that interests you into the search field at a site such as DevX, and you may be surprised how much detail you will find.
#9: Continuing education
Continuing education programs, also called adult education or community outreach, offer nondegree classes that are generally conducted in the evening for a modest fee. Besides the stereotypical class on how to weave a basket, many programs also offer database, networking, and a number of other technology classes. Many of these programs are run through high schools and colleges, so if you are not aware of any programs in your area, start by checking with your local high school, career center, or university for contact information.
#10: Community college
State-run community colleges generally offer a number of affordable classes you can take without seeking a degree. Many of these colleges offer technology and programming classes. Because you have probably not taken the prerequisites for the class, you may need the permission of the instructor, but that should not be a problem if you are already a professional in the field. These programs are usually far less expensive than your typical week-long vendor training and are usually scheduled during the evening to minimize the impact on your workday. There may also be for-profit community colleges in your area. But since they may lack public subsidies, be prepared to pay substantially more for their course offerings.
One less hurdle
Limited time, family demands, and travel may still keep you from dedicating yourself to learning a new skill, but if you’re creative, cost doesn’t have to be an obstacle. In fact, many of these training alternatives lend themselves nicely to working around the time and travel constraints that are so often a barrier. Take advantage of as many of these training approaches as you can, and you will have one less hurdle to moving your career forward.
10+ tips for combating Computer Vision Syndrome
If you spend two or more hours a day in front of a computer, you might suffer from Computer Vision Syndrome (CVS). Symptoms include headache, inability to focus, burning or tired eyes, double or blurred vision, and neck and shoulder pain.
Computer screens are the culprit. Our eyes don’t process screen characters as well as they do traditional print. Printed materials have well-defined edges; screen characters don’t. Our eyes work hard to remain focused on screen characters, and to temporarily relieve the stress, they drift and then strain to refocus. The constant muscle flexing causes fatigue. Keep in mind that computer screens aren’t the only screens that matter — most of your electronic toys, such as cell phones and PDAs, also cause eyestrain.
Fortunately, there are a number of simple (and mostly free) things you can do to alleviate CVS. Don’t wait until you’re suffering. Make these adjustments now.
#1: Use proper lighting
Most office settings use bright, often harsh lighting. The more light the better, right? Unfortunately, that’s not true. The solution to harsh, bright lights is simple; recognizing that the bright lights are hurting you is often the bigger problem.
If you have a window, use blinds or curtains to limit the amount of sunlight beaming in. Use lower intensity bulbs and tubes inside. If you have both, turn off the indoor lights and open your blinds or curtains until you’re comfortable.
If you’re used to working in bright light, you might feel a bit out of sorts at first. Give yourself some time to adjust to the softer lighting. If you can’t control the lighting, consider wearing tinted glasses.
#2: Reduce environmental glare
Glare is reflected light that bounces off surfaces such as walls and computer screens. Often, you don’t even realize you’re compensating for it, so finding glare might take a bit of effort. There are a few things that you can do to reduce the glare:
Paint bright walls a darker color and use paint with a matte finish.
Install an anti-glare screen and/or a glare hood on your monitor.
If you wear glasses, consider applying an anti-reflective coating to the lenses.
Glare screens help only part of the problem. They cut down on glare from the computer screen. Unfortunately, they won’t help your eyes focus better.
#3: Use proper computer settings
One of the simplest ways to reduce eyestrain is to adjust your monitor’s brightness and contrast settings. There’s no right or wrong setting. Just experiment until you’re comfortable.
If the background gives off a lot of light, reduce the brightness. In addition, keep the contrast between the background and characters high. Generally speaking, your settings are probably too bright, but a setting that’s too dark is just as tiring.
#4: Maximize comfort by adjusting text size and color
Adjusting the on-screen text’s size and color can provide relief. First, try enlarging the text. You’re probably using the smallest size you can to view more text on the screen, but that compounds the problem. Instead, enlarge the text to two to three times the smallest size you can read. Almost all software and most browsers will let you adjust text size. When possible, use black text on a white background, and avoid busy backgrounds. Sometimes you have no control, but do so when you can.
#5: Take a break!
Give your eyes frequent rest from the screen. The American Optometric Association (AOA) also suggests following its 20/20 rule when regular breaks just aren’t possible: Every 20 minutes or so, look away from the screen and focus on something in the distance for about 20 seconds.
(Breaks can be a touchy subject in the workplace, so discuss your needs with a supervisor. Don’t get yourself into trouble.)
#6: Clean your screen
The easiest tip of all is to clean your screen frequently. Dust, fingerprints, and other smears are distracting and make reading more difficult. Often, you don’t even see the dust; you just look right past it. Make it a habit to wipe off your screen frequently. Every morning isn’t too often and is easy to work into your routine.
#7: Position copy correctly
Glancing back and forth between a printed copy and your computer screen causes eyestrain. To ease discomfort, place the printed copy as close to your monitor as possible. In addition, use a copy stand if possible to keep the copy upright.
This is the one time you might want more light. A small desk lamp will suit your needs, but position it carefully so that it sheds light on the printed page but doesn’t shine into your face or reflect off your monitor. Remember to use soft light.
#8: Position yourself correctly
Keep your distance from the monitor; most people sit too close. Position your computer monitor about 20 to 24 inches from your eyes. Your screen’s center should be about 10 to 15 degrees below your eyes. This arrangement puts the least strain on your eyes and neck.
If you can’t change the distance between you and the monitor, adjust the text accordingly. For instance, if you’re sitting farther away than you should, increase the text size. It’s not the best solution, but it’s better than straining to see something that’s too far away.
#9: Get computer glasses
If you just can’t get relief, you might need special glasses you can wear just for working at the computer. You can’t pick these up at your favorite discount store; you’ll need a prescription from an eye doctor.
Don’t depend on prescription reading glasses to negate CVS either. Reading glasses help with distances of 16 to 21 inches. In contrast, computer glasses work for distances of 18 to 28 inches. It’s unlikely that the same pair of glasses will accommodate reading printed material and working at your computer.
#10: Seek alternative help
If all else fails, try something a little different, like yoga. In an Indian study of 291 people, half practiced yoga for an hour a day, five days a week, and noticed an improvement after 60 days. The other half, those not practicing yoga, saw no improvement. If your eyestrain doesn’t disappear, at least you’ll have fun and feel better in general.
10 ways to survive office politics
Office politics will never go away. It’s a fact of company life. However, destructive office politics can demoralize an organization, hamper productivity, and increase turnover. Here are some tips, applicable for both staff and management, on dealing with office politics.
#1: Live at peace with others
The easiest way to avoid problems with politics is to get along with people. I’m not saying you need to hug everyone and sing songs, and I’m not saying you have to be a pushover for everyone. You can be pleasant and professional, while at the same time being assertive when necessary. If you have a concern, focus only on the issue, not on the person. If you have to refuse a request, explain why and try to come up with alternative solutions.
Living at peace with others also means being careful about choosing sides during office power struggles. Aligning yourself with one faction or the other will prevent you from working effectively with people from the “other” side, thereby hampering your productivity and thus your performance. It’s even worse if “your” faction loses out. Instead, try to focus on your tasks, dealing with people in either faction on the basis of the tasks alone, and avoid talking about the political issue that separates the groups.
#2: Don’t talk out of school
Does your organization have issues? Have people told you things in confidence? Then keep those matters to yourself. Talking to outsiders about issues within your organization makes all of you look bad to that outsider. Furthermore, your boss or your boss’s boss will not appreciate that behavior. People will find out that you spoke about what they told you, and they’ll lose confidence in you and respect for you.
#3: Be helpful
We all have responsibilities and objectives, and those things should receive priority. Nonetheless, if it doesn’t take too much time, being helpful to others can reap benefits for you. Does someone need a ride in the direction you live? Did your co-worker leave the headlights on in the parking lot? Is someone having trouble building an Excel macro? If you can help that person, especially if you can do so without taking too much of your time, you benefit yourself as well as the other person. By doing these things, you’re building political capital and loyalty. In doing so, you reduce the chances that you will be the victim of political intrigue.
#4: Stay away from gossip
Nothing destroys the dynamics of an office more than gossip. Stay away from it, because nothing good comes from it. Just be sure you avoid the “holier than thou” attitude of lecturing your co-workers on the evils of gossip. You’ll make them lose face, and they’ll resent you. Instead, try subtly changing the subject. For example, suppose the group is talking about Jane’s problems with her child, and of course Jane is absent from the group. Do some free association and try to come up with some topic that’s related to Jane or her child, but won’t involve gossip. Then, make a comment about that topic.
For instance, suppose you know that Jane’s child is involved in a sports league. Mention this fact, thereby linking the child and the league. Then, shift the conversation so that you’re now talking about the league rather than Jane’s child. You could ask when schedules will be published, or if they need parent volunteers. If you do it right, no one will even notice that you’ve moved them away from the gossip.
#5: Stay out of those talk-down-the-boss sessions
Suppose your co-workers start complaining about the boss. If you join in, it makes you look disloyal to the boss. If you don’t, it looks awkward in the group. What can you do? As with the situation of gossip, try changing the subject by linking the boss to another topic, then talking about that topic instead. Or you could simply respond to your co-workers with a smile and a tongue-in-cheek, “Come on, aren’t we exaggerating? [name of boss] really isn’t THAT bad.” Be careful, though, because it could be taken as an admission by you that the boss is bad.
#6: Be a straight arrow
The best way to keep out of trouble politically is to be seen as someone who doesn’t play office politics — in other words, a straight arrow. Do what you say you’re going to do, alert people to problems, and admit your mistakes. Others will respect you, even if they don’t always agree with you. More important, you have a lower chance of being a victim of politics.
#7: Address the “politics” issue openly when appropriate
Many times, when I do organizational assessments, I sense anxiety on the part of client staff. To address this anxiety, I tell people I interview that I’m not there to get people fired. I’m there to help the organization function better. It might not completely allay their fears and suspicions, but at least I’ve brought up the issue and addressed it.
Think about doing the same thing if you believe politics is an underlying theme at your company. Tell people you’re not interested in scoring political points but only in getting the job done. It might not work, but unless you bring the matter up, there’s no chance at all that they will believe you. So if a co-worker is unavailable, and you have to act on that person’s behalf, consider saying to that person, “I had to act because of your absence. I wasn’t trying to go behind your back and I wasn’t trying to show you up.”
#8: Document things
Nothing saves a job or career more than having a written record. If you believe a matter will come back to haunt you, make sure you keep a record of it, either in e-mail or in a document. Documentation is also an effective way to highlight your own accomplishments, which can help you when your performance evaluation is conducted.
#9: Set incentives to foster teamwork
If you’re a manager or senior executive, take a close look at your incentives. Are you unwittingly setting up your staff to work against each other? Do your metrics address only individual departments, or do they also address how departments could benefit the larger organization?
For example, suppose the hardware department of Sears reduced all its prices by half. If you measured only profitability of the department, you would conclude that it is performing horribly. However, that measurement would neglect to account for increased volume in all other departments because of the hardware department.
If you reward employees in a department based only on how well that department does, you may inadvertently cause destructive competition among departments. Each one will be competing against every other one, and all the departments could end up in a worse position. To minimize this possibility, give employees incentives based not only on department results but on organization results as well. That way, employees from different departments have more motivation to work together and less motivation to engage in destructive politics.
#10: Set an example for your staff
People in an organization look to leadership to see how to act. Do you want your staff to refrain from negative politics? Do you want to see collaboration and teamwork instead of petty rivalries, jealousy, and back-stabbing? Act the way you want your staff to act, and they will follow you.
Tuesday, July 29, 2008
Five good security reads
Novels
The first part of the list consists of novels I have read in the last year that have a strong IT security focus, are well written, and can teach the security-minded IT professional something about security. If you haven’t read them yet, they should definitely be on your reading list.
They’re listed in the order I read them, which is conveniently also alphabetical order.
Cryptonomicon
This Neal Stephenson novel is unusual in that it is actually two tales, each with its own plot, woven into one. The narrative switches between them regularly: one is set during World War II, the other in the modern world. Specific modern technologies are often fictionalized (e.g., Finux, a thinly veiled reference to Linux, and Ordo, an encryption system that doesn’t exist in the real world but very well could), while more general technologies (e.g., cryptographic techniques in general) are entirely real.
The story introduces the reader to concepts that, for most of us, may be new. It ends up being almost accidentally educational in that respect, presenting ideas about cryptographic currencies, principles of cryptographic technology, and some of the history of modern computing and modern cryptography in forms easily digestible for the technically inclined reader. It even presents a rather unique demonstration of basic cryptographic principles in action in the form of the Solitaire cipher, a cryptographic system invented by Bruce Schneier specifically for Cryptonomicon that can be employed without a computer, using a normal deck of playing cards. It’s not a trivial, toy cryptographic system, however; it is meant to be a form of strong cryptography. In fact, when Cryptonomicon was published with the Solitaire cipher algorithm printed within its pages in the form of a Perl script, saving that script in a file on a computer in the US and emailing it to someone in another country would have violated US munitions export laws, because it qualified as “strong encryption.”
Halting State
Probably the least directly educational of the three, this novel by Charles Stross is most interesting for its speculations on virtual currencies, virtual realities in meatspace, cyber-terrorism, and the social implications of all of the above. The primary characters are involved in the investigation of what starts out looking like the “robbery” of a virtual bank in a near-future MMORPG, but quickly spins out of control as they discover that all is not as it at first seems.
It is written primarily in the second person, reminiscent of old text-based adventure games, which I found a little difficult to get into at first — especially with the switching between perspective characters in different chapters. It’s an engrossing tale with a well-constructed plot, however.
Little Brother
Cory Doctorow set out to write this novel for “young adults” (i.e., teenagers), with an intentionally educational thread throughout. The main character, a high school student with a perhaps more-than-healthy interest in learning what others don’t want him to know (and using that knowledge), is a hacker in the original sense. Narrating in the first person, he spends a fair bit of time explaining matters of IT security to the reader.
Little Brother is probably the best-written work of fiction that doubles as an educational text I have ever read, in part because it presents basic concepts within the context of the story and encourages the reader to pursue further knowledge on his or her own. If you read the entire novel and don’t find yourself inspired to read more on the subjects and concepts presented, you may just not be cut out to be a technologist at all. It’s the kind of book I wish I had in my hands when I was thirteen — but even now, about two decades older, it was a thoroughly enjoyable and inspiring read.
The plot surrounds the events following a terrorist attack on the Bay Bridge in San Francisco, in a future so near that I read for quite a while before I was sure it wasn’t simply set in the present. Politically, it looks like it may take place sometime around 2011, though it is flexible enough that it might believably take place any time in the next decade. The technologies are essentially the technologies we know today, with a few specific additions that could well arise in the next few years.
As usual, Doctorow’s challenges to the dominant paradigm go beyond the content of his fiction: this novel is not only available at bookstores and libraries but also as a free download under the terms of a Creative Commons license. If you like reading full-length novels in digital formats, you can get it as a plain text, PDF, or HTML file. I personally prefer having a physical book in my hands, so that’s the form of the novel I read.
For a more personal take on Little Brother, check out my brief review in my personal Weblog.
Related reading
The second part of the list is works that aren’t novels — in one case, a book-length essay on the development of operating systems, and in the other a collection of short stories.
In the Beginning was the Command Line
People who enjoy Cryptonomicon may also want to read Stephenson’s In the Beginning was the Command Line, a lengthy essay examining the history of operating systems. It was written in the late 1990s and is a little dated now, but the lessons it conveys are no less valuable. While it doesn’t directly address security, it does provide some insights into the design philosophies and necessities of operating systems, the collective mindset of their users, and other matters that provide a basis for understanding the security characteristics of systems incorporating various OSes and real-life end users. It has been published as a short book but is also available for download as a Mac StuffIt or Zip compressed plain text file, free of charge. Of all the works in this list, this is the only one I read for the first time before 17 July 2007. I have read it several times, however, the most recent being a few months ago. It’s not only worth reading once — it’s worth revisiting.
Overclocked: Stories of the Future Present
Doctorow’s Overclocked: Stories of the Future Present is a collection of short stories by the author of Little Brother. Many of them, individually, seem tailor-made to challenge the comfortable preconceptions of the modern technologist, illustrating in science fiction prose the possible consequences of contemporary technology policy. Like Little Brother, and most if not all the rest of Doctorow’s fiction, it is available as a free download as well as in dead-tree hardcopy editions.
Recommendations
If you’re a technology enthusiast, and there’s anything in the above list of works that you haven’t read, you should rectify that oversight soon. They’re all well written, informative, and often inspiring. Three of them are even available for free online, so the excuses for failing to read them lie somewhere between slim and none.
Bignum arithmetic and premature optimization
Donald Knuth, the patron saint of algorithm analysis, once famously said, “We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.” Programmers of a thoughtful bent constantly argue over what this means, and at times whether it is even true. Mostly, they ignore its implications for security.
As new programming languages become ever-more “high level” and dynamic, they get further and further from forcing the programmer to cater to the way computers “think”. This provides significant advantages for developing software swiftly and easily, sometimes at significant costs to the efficiency of the code itself. Moore’s Law, however, ensures that for many (if not most) cases those efficiency costs are absorbed by the hardware so thoroughly that users never see the difference, at least in a one-to-one comparison of general software functionality. In fact, for the same general functionality, software written in a higher level language will often outperform software written in a lower level language, if each is run on hardware contemporary with the language’s inception.
Of course, featuritis — a separate phenomenon entirely — often adds far greater weight to an application, which combines with the greater resource usage of higher-level dynamic languages to slow things down to the point where we start noticing something is wrong. That, however, is a separate matter.
There are those who will argue that choosing a language based on the comparative performance characteristics of programs written in that language is a case of premature optimization. When all you need is a command line utility that will complete its task in under half a second, and Ruby can fill that need, resorting to assembly language to eke maximum performance out of the program certainly seems like a bad trade, if the tendency of Ruby programs to be much easier to write and maintain is considered.
There is certainly a case to be made for lower level languages contributing to greater security. Knowing assembly language, or even a higher level “portable assembly” language such as C, helps the programmer wrap his brain around the concepts of von Neumann architecture computation. Even if you write all your software in a very high level language like Ruby, knowing what’s going on “under the hood”, as it were, can yield great rewards when some careful, precise tinkering is necessary — and in understanding the implications of what you’re doing with all those high level language constructs. This applies to security implications as much as to performance, portability, and stability implications.
Don’t take anything said here as dissuading you from learning lower level, static languages such as C or assembly. Even if you never use them in earnest, knowing these languages will help you really understand what you’re doing with higher level, dynamic languages, and may help you make your code more secure.
On the other hand, high level dynamic languages such as Ruby provide a lot of time saving linguistic constructs that, often as a happy accident, actually improve the security of your code without any effort on your part. An example is “bignum” handling.
In programming languages such as C, integers have limits to how big they can get. For instance, an unsigned integer variable might be limited to 16 bits, covering the range 0 to 2^16 - 1 (i.e., 0 to 65535). In unsigned 16-bit integer arithmetic, 65535 + 1 wraps around to 0, because the type is incapable of representing a numeric value outside the range 0-65535. In some cases, trying to stick a larger value than a data type can handle into a variable of that data type can crash the program, provide improper access to memory, or cause any of a number of other potential security issues. For this reason, programmers in languages like C need to be careful about how they use limited precision data types.
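To make that wraparound concrete, here’s a minimal C sketch of the 16-bit case just described, using the standard uint16_t type from stdint.h:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint16_t n = 65535;  /* the largest value a 16-bit unsigned integer can hold */
        n = n + 1;           /* unsigned overflow wraps modulo 2^16, so n is now 0 */
        printf("65535 + 1 = %u\n", (unsigned)n);
        return 0;
    }

The wraparound itself is well defined for unsigned types; the danger lies in what the rest of the program does with a value that silently became 0: an undersized buffer allocation, a loop bound, or a length check.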
Arbitrary precision arithmetic, also known as “bignum arithmetic”, is an arithmetic technique implemented in a programming language whereby the extent of an integer’s value is limited only by the restrictions of the hardware itself — essentially, by how much RAM the system has. This can, for instance, take the form of an automatic extension of the value that can be handled by the data type as it is needed, rather than limiting the value to an extent defined before a value is entered into a variable or otherwise handled by the program. As this greatly reduces the inherent danger of accepting overly large inputs, bignum arithmetic can prove a great boon to the security of a program.
Such arbitrary precision arithmetic capabilities can be had with languages such as C, via libraries like BigDigits, the GNU MP library (GMP), and CLN, but this is not the default behavior of the language and requires explicit use by the programmer. Languages such as Ruby, on the other hand, employ bignum arithmetic by default, as it is needed, without requiring any intervention on the part of the programmer. It’s important to understand concepts like fixed integer arithmetic, of course, but it’s not important to use it all the time — or even most of the time.
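For illustration, here is a short C sketch using GMP’s mpz interface; it assumes GMP is installed and that the program is linked with -lgmp:

#include <gmp.h>
#include <stdio.h>

int main(void)
{
    mpz_t n;                 /* an arbitrary precision integer */
    mpz_init_set_ui(n, 2);   /* n = 2 */
    mpz_pow_ui(n, n, 128);   /* n = 2^128, far beyond any fixed-width C type */
    gmp_printf("%Zd\n", n);  /* prints the full 39 digit value */
    mpz_clear(n);
    return 0;
}

The point is not that this is difficult (it isn’t), but that the C programmer has to know to reach for a library in the first place, while Ruby simply does the right thing by default.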
There are programmers who would complain about this implication, because arbitrary precision arithmetic generally imposes an efficiency penalty on programs that make use of it. In most cases, however, such concerns constitute a case of Knuth’s “premature optimization”, because unnecessary use of fixed precision arithmetic can lead to surprising behavior from your software if you make a mistake in development and some unexpected input overflows an integer.
For security purposes, it’s generally the case that Ruby’s way of doing this is the right way to do it: default to avoiding the all too common dangers of fixed precision arithmetic altogether. The only fly in the ointment is the rare occasion where the performance penalty of arbitrary precision arithmetic really matters — or the rare field of endeavor where it matters often.
When a nanosecond improvement in runtime doesn’t matter, choose the tools that will make it easy to write more secure code.
Five ways to show business value of M-F authentication
There’s more to selecting an enterprise second-factor authentication method than meets the retina scanner. As with any IT project, each dollar spent must produce business value. With M-F authentication, this translates to value beyond simply verifying an employee’s identity.
Too often, security professionals are mesmerized by regulatory or best practice multi-factor (M-F) authentication mantras. They don’t see that selling M-F solutions to management requires more than a strategically placed HIPAA, SOX, or CoBIT two-by-four. Besides, using regulatory requirements to squeeze additional security dollars out of the IT budget is an argument with diminishing returns.
There are five basic characteristics of an M-F solution that affect its potential for showing business value: an acceptable probability of success in verifying identity, easy enrollment, enhanced productivity, single sign-on (SSO) enablement, and user acceptance.
1. Achieves business-defined probability of success in verifying identity – This is the obvious function of an M-F solution. It should supplement the primary authentication method, usually password-based, by meeting a business-defined threshold for positive verification. Expecting an M-F method to produce 100 percent accuracy is the first mistake of many security managers. Even the effectiveness of fingerprint recognition is determined by its error rates; biometric systems trade off false acceptances against false rejections, and tightening one typically loosens the other. Unless you’re guarding the crown jewels or defense department secrets, the cost of solutions that achieve zero errors is usually higher than necessary to achieve reasonable and appropriate protection. The level of success necessary depends on the strength of your passwords, business tolerance for risk, and the existence and effectiveness of other access controls.
2. Easy enrollment – Enrollment should take less than two minutes and be easily integrated into the new-hire process. Presenting a solution to management that requires employees to juggle three balls while whistling Dixie is not going to help your cause. For example, I just looked at a solution last week that required users to answer over 60 questions to get set up. The solution, currently an academic exercise only, achieved a probability of success that was high enough, but enrollment challenges make it almost impossible to gain management acceptance.
3. Enhances productivity – The user experience should be improved, eliminating existing authentication challenges in ways that go beyond regulatory compliance. In fact, selling a solution to management might require demonstrating how it can solve other issues. For example, many health care organizations deploy shared computers to nurses’ stations. Several nurses use these devices, logging in many times during each shift. Their ability to provide care might be enhanced by an M-F solution that quickly verifies their identity and performs fast user switching, eliminating time lost dealing with system authentication issues. Proximity detection can make this happen before the nurse even gets to the keyboard. Another enhancement is SSO-like functionality. Although users have to authenticate to each application, the use of M-F technology can often eliminate the need to enter a user ID and password every time.
4. Enables SSO – The M-F solution should be compatible with future SSO implementations. Selecting an M-F technology without considering SSO is a big mistake. The cost of an M-F solution can be high, and ripping it out because it isn’t compatible with the SSO technology you choose is a career-limiting exercise. According to Forrester, the best approach is selecting an SSO solution first, even if implementation is two to three years in the future. Implementation of an M-F solution should then happen within the context of your SSO vision. Share that vision with management, positioning your biometrics or smart-card solution as an incremental step toward an improved user experience.
5. Acceptable to users – The solution must be easy to use and should actually improve the way users see the security that protects information assets. Nothing kills an M-F rollout faster than user revolt. User resistance is often based on one or more of the following:
- Fear that the company stores unique personal information
- Fear that the company is collecting personal health information (retinal scans look at patterns that are also used to determine certain health conditions) for insurance purposes
- Fear that the red light in retinal scanning sensors is physically harmful
- Fear of contracting diseases through contact with publicly used sensors
- High error rate, without an easy alternative to logging in
The fears described in the first four bullets can be assuaged with pre-rollout discussions with users or user representatives, helping them understand the facts about the M-F technology selected. The last item is a technology challenge.
As I wrote earlier in this post, M-F technology isn’t perfect. There will be errors. One error that frustrates users is rejection of authorized login attempts. Frustration levels can be controlled by ensuring your solution includes an easy way to deal with these issues as they arise. Remember, this is supposed to improve the user experience.
Use tcpdump for traffic analysis
The tcpdump tool is an old mainstay of network debugging and security monitoring, and security experts all over the world swear by its usefulness. It is a command line tool that eschews all the makeup and jewelry of other traffic analysis tools such as Ettercap and Wireshark, both of which provide packet sniffing functionality with a convenient captive interface. In contrast to such tools, tcpdump takes a command at the shell, with options specified at that time, and dumps the results to standard output. This may seem primitive to some users, but it provides power and flexibility that isn’t available with the common captive interface alternatives.
Options
The tcpdump utility provides dozens of options, but I’ll just cover a few of them here (a combined example follows the list):
-A: Print each packet in ASCII.
-c N: Where N is a number, this option tells tcpdump to exit after capturing N packets.
-i interface: Capture packets on the specified network interface.
-n: Don’t resolve addresses to names.
-q: Provide less verbose (“quiet”) output so output lines are shorter.
-r filename: Read packets from the specified file rather than a network interface. This is usually used after raw packets have been logged to a file with the -w option.
-t: Don’t print a timestamp on each line of output.
-v: Provide more verbose output. Verbosity can be increased more with -vv, and even more than that with -vvv.
-w filename: Write raw packets to the specified file.
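To see how several of these options combine in practice, a command along the lines of tcpdump -i eth0 -n -c 100 -w sample.pcap captures 100 packets on a given interface without name resolution and saves them for later inspection (the interface and file names here are placeholders; substitute your own). You can then review the capture offline with tcpdump -n -r sample.pcap, adding -v or -A on the read pass for more detail.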
Expressions
The tcpdump utility also supports command-line expressions, used to define filtering rules so that you get exactly the traffic you want to see, ignoring “uninteresting” packets. Expressions consist of a number of primitives and, optionally, modifier terms. The following primitives and modifiers do not constitute a comprehensive list, but they are among the most commonly useful.
Primitives
dst foo: Specify an address or hostname to limit captured packets to traffic sent to a particular host.
host foo: Specify an address or hostname to limit captured packets to traffic to and from a particular host.
net foo: Specify a network or network segment using CIDR notation to limit packet capture.
proto foo: Specify a protocol to limit captured packets to network traffic using that protocol.
src foo: Specify an address or hostname to limit captured packets to traffic sent by a particular host.
Modifiers
and: Use this to chain together primitives when you want to limit captured packets to those that meet the requirements of the expressions on both sides of the and.
not: Use this modifier just before a primitive when you want to limit captured packets to those that do not meet the requirements of the following expression.
or: Use this to chain together primitives when you want to limit captured packets to those that meet the requirements of one or more of the expressions on either side of the or.
Examples
All of these options and expression primitives and modifiers, along with others listed in the tcpdump manpage, can be used to construct very specific commands that produce very precise output.
tcpdump -c 50 dst foo can give you information that may help identify the source of heavy incoming traffic targeting an overloaded server with hostname “foo”, dumping the first 50 packets as output.
tcpdump -c 500 -w `date +"%Y%j%T"`.log dumps 500 packets to a file named with a current time/date stamp (e.g. 200820715:16:31.log) so that they can later be filtered according to the information you want to see. I have the command date +"%Y%j%T" aliased to stamp in my shell’s rc file, so I can shorten a command like this to tcpdump -c 500 -w `stamp`.log, saving me from having to remember all the formatting options for the date command off the top of my head.
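For the curious, in a Bourne-style shell such an alias might look something like alias stamp='date +"%Y%j%T"' in your shell’s rc file, though the exact syntax depends on which shell you use.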
tcpdump port ssh and host foo and not host bar produces ongoing output that shows all SSH activity originating from or targeting host “foo” unless it is originating from or targeting host “bar”. (SSH is matched here by its well-known port, since it isn’t a protocol tcpdump can identify at the IP layer.) If foo is only supposed to be accessed via SSH by bar, this command will allow ongoing monitoring of unauthorized SSH traffic to and from foo. You could even start a number of persistent monitoring processes with tcpdump like this within a tmux session on a dedicated monitoring server.
As you can no doubt see, tcpdump’s expression capabilities are roughly equivalent to a simple domain specific programming language that is extremely easy to understand. With that kind of power and flexibility at my fingertips, there’s little need to use anything else for general traffic analysis tasks.