Original Link: https://www.anandtech.com/show/832
Trapped In The Past
Back in the day we had Client / Server. We had dumb terminals tied to a big old mainframe that did all the hard work for us. Then we got into the world of the PC, where each computer had some computing power of its own and was able to work individually, outside of the mainframe environment. However, as these PCs were integrated into the fabric of business networking, many still adhered to the Client / Server model. But as time progressed and PCs got more powerful and more capable, we began to see larger and more complicated processing requests placed on the mainframes. Graphics and 3D modeling became more prevalent thanks to systems from Intergraph and the introduction of AutoCAD and later 3D Studio on the PC. With the introduction and eventual acceptance of Microsoft Windows on the PC platform, mainframes became more and more overloaded. The amount of data being transferred over networks was becoming prohibitive, and hosting graphical applications in the Client / Server model became a real drain.
The Shifting Burden
It was during this evolutionary stretch of time that we began to see the proliferation of empowered Peers, or individual workstations that executed their applications locally and later synchronized with the mainframe data. Applications like dBase, Lotus 1-2-3 and WordStar enabled users on each machine to complete much of their work locally, and as personal printers became less expensive and more widely available from vendors like Okidata and Epson, it was a whole lot easier to get your work done without ever touching the mainframe.
While a full-blown Client / Server model may have decided advantages in certain areas, there are also some very serious disadvantages. Performance can be one of them. The more people you have running applications from a Host Server, the slower the response to each individual session seems to be. In addition, if the server crashes, the hosted applications can become unavailable, potentially reducing the entire level of office productivity to zero. It is an extreme example, perhaps, but it is a point well made.
Slowly, we have seen the development of hybrid configurations, where servers no longer host most applications but still host the master data files. A local application can request a copy of all or part of the data hosted on a server, work with it independently, then send it back to the server for synchronization when finished. Since the server only has to deal with data hosting instead of application hosting, it is able to respond to more requests with fewer processing resources, increasing its transaction throughput and, in essence, providing more for less. On the server side, ROI (Return On Investment) goes up, and productivity-killing downtime is reduced. Since much of the burden is shifted to individual client machines, you no longer have the "domino effect" to deal with when failures occur. No longer will one hardware failure incapacitate an entire group of workers. Instead, failures are often localized and can be analyzed and repaired without causing downtime for the rest of the workforce.
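To make the check-out/check-in idea concrete, here is a minimal sketch in Python; the server share, file name and paths are purely hypothetical, not drawn from any particular product.

```python
import shutil
from pathlib import Path

# Hypothetical locations: a master data file on a server share and a
# local working copy on the individual workstation.
SERVER_COPY = Path(r"\\server\data\customers.csv")
LOCAL_COPY = Path(r"C:\work\customers.csv")

def check_out():
    """Pull a local copy of the master data so the application can
    work with it independently of the server."""
    shutil.copy2(SERVER_COPY, LOCAL_COPY)

def check_in():
    """Send the locally edited copy back to the server for
    synchronization once the work is finished."""
    shutil.copy2(LOCAL_COPY, SERVER_COPY)

if __name__ == "__main__":
    check_out()
    # ... the user edits LOCAL_COPY with a local application ...
    check_in()
```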
The Potential Downside
License compliance can be a concern as one moves from a strict Client / Server model to a hybrid system. When applications are hosted on a server, you can limit the number of users to whatever figure your concurrent licensing agreement provides for. If you have 50 concurrent AutoCAD licenses, you can enforce a strict limit of 50 concurrent users with little effort, and documenting compliance is much easier since it can be done from a single source. However, when you move application hosting to individual machines, it can become burdensome to document and enforce full compliance. It can also become a hassle to configure a wide-spread upgrade deployment scheme. While there are products like SMS and Tivoli to help push applications to the desktop and document what is installed and executed, they add another level of complexity that would not be there in a full Client / Server model. Investing in the software and the expertise needed to take full advantage of them is no low-impact item; it can in fact be a budget-buster while the kinks are worked out at the enterprise level. In our previous Linux article we mentioned the increasing cost of the blanket licenses now being required by Microsoft and some other companies, depending on site specifics. A license for 50 concurrent users may not be as easy to obtain if the vendor has concerns about compliance. Instead, they may insist upon per-seat licenses for each and every employee who may have access to the software, upping costs noticeably. It may all depend on your specific configuration, but it is a possibility worthy of some investigation if you are trying to determine the cost impact of such a move.
In addition to licensing, you have increased hardware costs. Historically, these have been quite high and have been one of the many reasons why companies have seen infrastructure costs soar to record levels. However, with the drastic drop in PC system and component prices and adjustments to the depreciation tables by Federal tax regulators, the edge has been taken off of these expenditures, and it is possible to realize a tangible return on investment in a much shorter time.
The Case For Dedicated Peer To Peer
As technology has progressed and infrastructure costs have soared, I have seen firsthand a movement that is somewhat surprising: dedicated Peer to Peer configurations in small businesses are becoming more common than ever. With the advent of DSL and the increased speed and bandwidth it provides, it is possible to purchase a $100 router and enable simultaneous internet access for a theoretical limit of 253 peer stations. Since most small businesses have fewer than 50 employees, this limit is not usually tested and may not even be realistic, but I have seen actual cases where 100-user configurations shared a single DSL access point quite successfully. No longer is it necessary to purchase expensive Cisco equipment and install a complex infrastructure. You can instead plug a simple phone cord into a DSL modem, connect that modem to a simple router and have that router provide access to your entire employee base. Many of these routers also act as DHCP servers, which gives you another feature typically associated with servers in a dedicated Peer to Peer configuration.
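Where does the 253 figure come from? A consumer router of this kind typically serves a single /24 private network, which offers 254 usable addresses; the router itself takes one, leaving 253 for peer stations. A quick check in Python (the 192.168.1.0/24 network is just an illustrative choice):

```python
import ipaddress

# A typical small-office router hands out addresses from one /24
# private network; 192.168.1.0/24 is a common (illustrative) choice.
lan = ipaddress.ip_network("192.168.1.0/24")

usable = len(list(lan.hosts()))  # 254 usable host addresses
peers = usable - 1               # minus one address for the router itself

print(usable, peers)             # prints: 254 253
```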
In addition to this exciting new technology, I have seen a dramatic lowering in the cost of internet outsourcing. It is not uncommon to find packages for under $250 a month that provide you with 50 megabytes of web space, plenty of bandwidth and 100 or more individual POP3 mailboxes. The complete set of FrontPage extensions is common in many of these packages, allowing you to take advantage of some of the advanced data features provided by such custom configurations should you wish to go that route. These packages can often be administered in their entirety from a simple web interface, meaning that the need for dedicated web server experts is reduced or eliminated entirely. You can simply assign one or two key individuals the task of maintaining the configuration, and they should be able to address any needs that do arise via a web browser and a secure, encrypted logon. Since the outsourcing is a service, you may find that the host is more than willing to help train and troubleshoot as problems crop up. Now, instead of investing in an expensive server-based infrastructure for your web and internet hosting, and the one or two experts to run it, you can defer the costs almost entirely and have many of the same benefits with less downtime and maintenance. You may even be able to obtain uptime guarantees, and you can certainly shop around for competitive bargains should you feel the need.
The Super Peer
Over the years, I have seen some companies take an innovative approach to resource sharing in a dedicated Peer to Peer environment. Since the cost of individual machines and peripherals has dropped so low, it is easy to purchase and configure "Super Peers" that serve primarily as shared workstations but can also become productivity stations should the need arise. For instance, one well-configured workstation in each "cluster" of employees can host a scanner, a CD-RW drive with BURN-Proof technology, a color printer, a high-volume black and white laser printer and perhaps even a plotter and a tape backup device. Thanks to USB, it is quite easy to configure and share multiple printers and plotters on that one Super Peer station, making them readily available to anyone with network access.
The unit can be outfitted with large IDE drives for very little cost and also used as a data storage and backup point in that shared environment. You could easily purchase three additional 100 gigabyte IDE drives for less than $900 and allow some of the space to be used for personal folders, project folders and more. You could even designate some of the space as a data clearing house, where employees transfer entire projects to another group by copying all of the relevant files to the Super Peer drive and allowing that other group to move them onto another local machine. Employees could store completed files in a read-only project archive so that they remain available to all other employees with access to that share.
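As a rough illustration, here is a small Python sketch of the kind of folder layout described above; the drive letter and folder names are hypothetical, and actual read-only enforcement would come from the share permissions on the Super Peer rather than from the script.

```python
from pathlib import Path

# Hypothetical layout for the extra IDE storage on the Super Peer.
ROOT = Path(r"D:\Shared")

FOLDERS = [
    "Personal",  # per-employee backup copies of important files
    "Projects",  # working project folders shared across the group
    "Transfer",  # clearing house for handing projects to another group
    "Archive",   # completed projects, exposed as a read-only share
]

for name in FOLDERS:
    # Create each top-level shared folder if it does not already exist.
    (ROOT / name).mkdir(parents=True, exist_ok=True)
```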
If you need a series of documents scanned, you can simply sit down at the Super Peer station, complete the task, store the data in a shared folder and access it from your individual workstation. You can copy your data from your machine to a shared folder, head over to the Super Peer and burn it onto a CD-R that you can then mail off to a client. Employees can store backup copies of their important data files on the Super Peer drives, and thanks to the miracle of scheduled backups, those copies can be archived nightly to the DAT drive attached to that same Super Peer station. I have even seen one small company equip the Super Peer with a Matrox G400-TV capture card and a VCR and use it to digitize footage they shot for a safety training video. They encoded the video, stored it on a shared drive and then edited a local copy of the movie on a separate workstation with their own copy of Adobe Premiere. It worked surprisingly well.
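The nightly archive step can be as simple as a scheduled script. Here is a minimal sketch, assuming hypothetical paths and leaving the actual tape write to whatever backup software drives the DAT drive:

```python
import shutil
from datetime import date
from pathlib import Path

# Hypothetical paths: the shared backup area on the Super Peer and a
# staging area that the tape backup software archives each night.
SOURCE = Path(r"D:\Shared\Personal")
STAGING = Path(r"D:\NightlyStaging")

def nightly_backup():
    """Copy the shared backup folders into a dated staging folder;
    the DAT drive's backup software then archives that folder."""
    target = STAGING / date.today().isoformat()
    shutil.copytree(SOURCE, target)

if __name__ == "__main__":
    nightly_backup()
```

On a Windows Super Peer a script like this could be run nightly by the built-in scheduler; the point is simply that the archiving step need not be elaborate.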
In addition to all of this, the Super Peer can be used by employees or contractors as a temporary overflow workstation. It can even be used to host demonstrations for visiting clients should the need arise. With this particular configuration, you have a great deal of flexibility and a significant reduction in the overhead costs normally associated with a true Server. Client licensing is a non-issue, for example.
Things To Keep An Eye On
Administering this configuration may not always be easy, and it is certainly not going to give you the level of security that you may need in some situations. However, for many small companies where security is not a heavy concern, such as an architectural firm sharing CAD drawings, or a graphic design firm sharing bitmaps and vector files during the development of a flyer or brochure, a Super Peer configuration can be more than adequate and can provide growing companies with a way to "ease into" a Client / Server hybrid model should that need later become a reality.
It may be necessary to remind employees from time to time that the Super Peer is designed to be a supplement to their individual stations, not a replacement. Critical files should still be stored primarily on individual machines, with a copy placed on the shared drive. Employees should not use those shared folders as a data source while actively creating and editing files, but as an extra place to store backups of their data once they are through manipulating it.
You may want to designate one or two key people to be the focal point for questions and requests, to help keep potential problems to a minimum. You would not, for example, want to encourage individual employees to install additional applications on the Super Peer. You may instead want to ensure that they touch base with one of those key people and make the request so that it can be handled in an orderly way, at a time when it is least likely to impact other workers. You may also want to assign those personnel some general system maintenance tasks to ensure that the Super Peer continues to function as expected and maintains a high rate of uptime. Finally, you may want to outfit that machine with a solid Uninterruptible Power Supply with Automatic Voltage Regulation to help guard against power-related problems.
Summing It All Up
This article is designed to highlight some of the possibilities, not to put forward the idea that this is an all-encompassing solution applicable to every small business out there. It may in fact not be workable at all for some companies, depending on their security and scalability needs. But for others, it may be a great way to maximize the resources you do have without incurring the administrative, licensing and equipment costs associated with a true Client / Server infrastructure.
The exciting thing for many of us is that technology that was once exclusive to large companies is now becoming available to the masses at a greatly reduced cost. The advent of DSL and personal routers has changed the landscape significantly for small businesses around the country and gives them more options than they have possibly ever had before. With a fraction of the capital outlay, you can have many of the same benefits, such as pseudo print and file serving and always-on internet service with individual email accounts for each employee. No longer is it necessary to spend tens of thousands of dollars up front for large server machines and a variety of server applications. Instead, you can grow your infrastructure judiciously until your needs exceed the reach of this interim solution, if they in fact ever do.
Many companies in recent years have seen their IT costs soar so quickly that it has shocked them to the core of their bottom line. It can be a humbling experience to realize that you may never get back a return that matches the initial capital outlay. It can also be a devastating experience to realize that because your infrastructure costs were so much higher than expected, you may no longer have the capital you needed to keep things going the way you had intended. The recent Dot-Com implosion is surely replete with such cases. This type of solution may not be right for everybody, but I'd much rather have the flexibility to explore options like these than be limited to only one or two choices that could bankrupt me within months if I don't hit the customer mother lode fast enough. A long journey, after all, starts with but a single step, and to be frank, I'd rather be the tortoise whose business survives the hard times because of frugal planning than the hare who may be a high-profile risk-taker throwing wads of cash at the best of the best, but ends up sipping drinks at a combination Dot-Gone party and job fair wondering where it all went wrong...