Thursday, July 10, 2008

Ethernet

In today's business world, reliable and efficient access to information has become an important asset in the quest to achieve a competitive advantage. File cabinets and mountains of papers have given way to computers that store and manage information electronically. Coworkers thousands of miles apart can share information instantaneously, just as hundreds of workers in a single location can simultaneously review research data maintained online.

Computer networking technologies are the glue that binds these elements together. The public Internet allows businesses around the world to share information with each other and their customers. Services built on the Internet, such as the World Wide Web, let consumers buy books, clothes, and even cars online, or auction those same items off when no longer wanted.

In this article, we will take a very close look at networking, and in particular the Ethernet networking standard, so you can understand the actual mechanics of how all of these computers connect to one another.

Why Network?
Networking allows one computer to send information to and receive information from another. We may not always be aware of the numerous times we access information on computer networks. Certainly the Internet is the most conspicuous example of computer networking, linking millions of computers around the world, but smaller networks play a role in information access on a daily basis. Many public libraries have replaced their card catalogs with computer terminals that allow patrons to search for books far more quickly and easily. Airports have numerous screens displaying information regarding arriving and departing flights. Many retail stores feature specialized computers that handle point-of-sale transactions. In each of these cases, networking allows many different devices in multiple locations to access a shared repository of data.

Before getting into the details of a networking standard like Ethernet, we must first understand some basic terms and classifications that describe and differentiate network technologies -- so let's get started!

Local Area vs. Wide Area
We can classify network technologies as belonging to one of two basic groups. Local area network (LAN) technologies connect many devices that are relatively close to each other, usually in the same building. The library terminals that display book information would connect over a local area network. Wide area network (WAN) technologies connect a smaller number of devices that can be many kilometers apart. For example, if two libraries at the opposite ends of a city wanted to share their book catalog information, they would most likely make use of a wide area network technology, which could be a dedicated line leased from the local telephone company, intended solely to carry their data.

In comparison to WANs, LANs are faster and more reliable, but improvements in technology continue to blur the line of demarcation. Fiber optic cables have allowed LAN technologies to connect devices tens of kilometers apart, while at the same time greatly improving the speed and reliability of WANs.

The Ethernet
In 1973, at Xerox Corporation’s Palo Alto Research Center (more commonly known as PARC), researcher Bob Metcalfe designed and tested the first Ethernet network. While working on a way to link Xerox’s "Alto" computer to a printer, Metcalfe developed the physical method of cabling that connected devices on the Ethernet as well as the standards that governed communication on the cable. Ethernet has since become the most popular and most widely deployed network technology in the world. Many of the issues involved with Ethernet are common to many network technologies, and understanding how Ethernet addressed these issues can provide a foundation that will improve your understanding of networking in general.

The Ethernet standard has grown to encompass new technologies as computer networking has matured, but the mechanics of operation for every Ethernet network today stem from Metcalfe’s original design. The original Ethernet described communication over a single cable shared by all devices on the network. Once attached to this cable, a device could communicate with any other attached device. This allowed the network to expand to accommodate new devices without requiring any modification to those devices already on the network.

Ethernet Basics
Ethernet is a local area technology, with networks traditionally operating within a single building, connecting devices in close proximity. At most, Ethernet devices could have only a few hundred meters of cable between them, making it impractical to connect geographically dispersed locations. Modern advancements have increased these distances considerably, allowing Ethernet networks to span tens of kilometers.

Protocols
In networking, the term protocol refers to a set of rules that govern communications. Protocols are to computers what language is to humans. Since this article is in English, to understand it you must be able to read English. Similarly, for two devices on a network to successfully communicate, they must both understand the same protocols.

Ethernet Terminology
Ethernet follows a simple set of rules that govern its basic operation. To better understand these rules, it is important to understand the basics of Ethernet terminology.
· Medium - Ethernet devices attach to a common medium that provides a path along which the electronic signals will travel. Historically, this medium has been coaxial copper cable, but today it is more commonly a twisted pair or fiber optic cabling.
· Segment - We refer to a single shared medium as an Ethernet segment.
· Node - Devices that attach to that segment are stations or nodes.
· Frame - The nodes communicate in short messages called frames, which are variably sized chunks of information.

Frames are analogous to sentences in human language. In English, we have rules for constructing our sentences: We know that each sentence must contain a subject and a predicate. The Ethernet protocol specifies a set of rules for constructing frames. There are explicit minimum and maximum lengths for frames, and a set of required pieces of information that must appear in the frame. Each frame must include, for example, both a destination address and a source address, which identify the recipient and the sender of the message. The address uniquely identifies the node, just as a name identifies a particular person. No two Ethernet devices should ever have the same address.
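As a rough sketch of these rules, the classic (DIX) frame layout can be expressed in a few lines of Python; the field sizes follow the commonly quoted figures, and the function and constant names are my own:

```python
# Classic (DIX) Ethernet framing rules: 6-byte destination and source
# addresses, a 2-byte type field, a payload, and a 4-byte checksum,
# with the whole frame held between 64 and 1,518 bytes.

MIN_FRAME = 64                     # bytes, checksum included
MAX_FRAME = 1518
HEADER = 6 + 6 + 2                 # destination + source + type
FCS = 4                            # frame check sequence (checksum)

def build_frame(dest: bytes, src: bytes, payload: bytes) -> bytes:
    """Assemble a frame body, padding short payloads up to the minimum."""
    if len(dest) != 6 or len(src) != 6:
        raise ValueError("Ethernet addresses are exactly 6 bytes")
    pad = max(0, MIN_FRAME - FCS - HEADER - len(payload))
    frame = dest + src + b"\x08\x00" + payload + b"\x00" * pad
    if len(frame) + FCS > MAX_FRAME:
        raise ValueError("payload too large for a single frame")
    return frame

frame = build_frame(b"\xaa" * 6, b"\xbb" * 6, b"hello")
print(len(frame))   # 60 -- the 4 checksum bytes would bring it to 64
```

Note how a 5-byte payload gets padded: the minimum frame length exists so that even tiny messages occupy the wire for a predictable amount of time.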

Ethernet Medium
Since a signal on the Ethernet medium reaches every attached node, the destination address is critical to identify the intended recipient of the frame.
For example, in the figure above, when computer B transmits to printer C, computers A and D will still receive and examine the frame. However, when a station first receives a frame, it checks the destination address to see if the frame is intended for itself. If it is not, the station discards the frame without even examining its contents.

One interesting thing about Ethernet addressing is the implementation of a broadcast address. A frame with a destination address equal to the broadcast address (simply called a broadcast, for short) is intended for every node on the network, and every node will both receive and process this type of frame.
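The receive-side check described above can be sketched simply: a node accepts a frame addressed to itself or to the broadcast address, and discards everything else without reading the contents. The addresses below are invented for illustration:

```python
# A node's decision on hearing a frame from the shared medium.

BROADCAST = b"\xff" * 6            # destination meaning "every node"

def accepts(my_address: bytes, frame: bytes) -> bool:
    destination = frame[:6]        # the destination address leads the frame
    return destination in (my_address, BROADCAST)

computer_a = b"\x00\x00\x00\x00\x00\x01"
computer_b = b"\x00\x00\x00\x00\x00\x02"
printer_c = b"\x00\x00\x00\x00\x00\x03"

frame = printer_c + computer_b + b"print me"    # B transmitting to C

print(accepts(printer_c, frame))                # True  -- intended recipient
print(accepts(computer_a, frame))               # False -- discarded unread
print(accepts(computer_a, BROADCAST + b"hi"))   # True  -- broadcasts reach all
```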

CSMA/CD
The acronym CSMA/CD signifies carrier-sense multiple access with collision detection and describes how the Ethernet protocol regulates communication among nodes. While the term may seem intimidating, if we break it apart into its component concepts we will see that it describes rules very similar to those that people use in polite conversation. To help illustrate the operation of Ethernet, we will use an analogy of a dinner table conversation.

Let’s represent our Ethernet segment as a dinner table, and let several people engaged in polite conversation at the table represent the nodes. The term multiple access covers what we already discussed above: When one Ethernet station transmits, all the stations on the medium hear the transmission, just as when one person at the table talks, everyone present is able to hear him or her.

Now let's imagine that you are at the table and you have something you would like to say. At the moment, however, I am talking. Since this is a polite conversation, rather than immediately speak up and interrupt, you would wait until I finished talking before making your statement. This is the same concept described in the Ethernet protocol as carrier sense. Before a station transmits, it "listens" to the medium to determine if another station is transmitting. If the medium is quiet, the station recognizes that this is an appropriate time to transmit.

Collision Detection
Carrier-sense multiple access gives us a good start in regulating our conversation, but there is one scenario we still need to address. Let’s go back to our dinner table analogy and imagine that there is a momentary lull in the conversation. You and I both have something we would like to add, and we both "sense the carrier" based on the silence, so we begin speaking at approximately the same time. In Ethernet terminology, this overlap of two simultaneous transmissions is called a collision.

In our conversation, we can handle this situation gracefully. We both hear the other speak at the same time we are speaking, so we can stop to give the other person a chance to go on. Ethernet nodes also listen to the medium while they transmit to ensure that they are the only station transmitting at that time. If the stations hear their own transmission returning in a garbled form, as would happen if some other station had begun to transmit its own message at the same time, then they know that a collision occurred. A single Ethernet segment is sometimes called a collision domain because no two stations on the segment can transmit at the same time without causing a collision. When stations detect a collision, they cease transmission, wait a random amount of time, and attempt to transmit when they again detect silence on the medium.

The random pause and retry is an important part of the protocol. If two stations collide, both will still need to transmit their data, and at the next quiet moment on the medium both will have frames ready to send. If each retransmitted at the first opportunity, they would most likely collide again, and again indefinitely. Instead, the random delay makes it unlikely that any two stations will collide more than a few times in a row.
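A toy version of this retry rule (the standard calls it "truncated binary exponential backoff"): after the nth collision a station waits a random number of slot times between 0 and 2^n - 1, so repeat collisions between the same stations quickly become unlikely. The slot time shown is the classic 10-Mbps figure; the rest is illustrative:

```python
import random

SLOT_TIME_US = 51.2                # microseconds: 512 bit times at 10 Mbps

def backoff_delay(collisions: int) -> float:
    """Random wait, in microseconds, after a given number of collisions."""
    n = min(collisions, 10)        # the random range stops growing at 10
    slots = random.randint(0, 2 ** n - 1)
    return slots * SLOT_TIME_US

random.seed(1)                     # fixed seed only to make runs repeatable
for collision in (1, 2, 3):
    print(f"after collision {collision}: wait {backoff_delay(collision):.1f} us")
```

Because the range of possible delays doubles with each collision, persistent contention spreads the stations' retries further and further apart.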

Limitations of Ethernet
A single shared cable can serve as the basis for a complete Ethernet network, which is what we discussed above. However, there are practical limits to the size of our Ethernet network in this case. A primary concern is the length of the shared cable.
Electrical signals propagate along a cable very quickly, but they weaken as they travel, and electrical interference from neighboring devices (fluorescent lights, for example) can scramble the signal. A network cable must be short enough that devices at opposite ends can receive each other's signals clearly and with minimal delay. This places a distance limitation on the maximum separation between two devices (called the network diameter) on an Ethernet network.
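A back-of-envelope calculation (my own figures, ignoring the repeater and electronics delays the real standard also budgets for) shows how this length limit ties into collision detection: a station must still be transmitting when news of a collision at the far end of the cable gets back to it, so the round-trip signal time must fit within one minimum-size frame.

```python
BIT_RATE = 10e6                    # classic Ethernet: 10 megabits/second
MIN_FRAME_BITS = 64 * 8            # the 64-byte minimum frame
PROPAGATION = 2e8                  # rough signal speed in copper, metres/sec

frame_time = MIN_FRAME_BITS / BIT_RATE      # seconds spent transmitting

def round_trip(metres: float) -> float:
    """Time for a signal to reach the far end and come back."""
    return 2 * metres / PROPAGATION

print(frame_time > round_trip(2500))   # True -- 2,500 m is workable
print(frame_time > round_trip(10000))  # False -- too far to hear collisions
```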

Additionally, since in CSMA/CD only a single device can transmit at a given time, there are practical limits to the number of devices that can coexist in a single network. Attach too many devices to one shared segment and contention for the medium will increase. Every device may have to wait an inordinately long time before getting a chance to transmit.
Engineers have developed a number of network devices that alleviate these difficulties. Many of these devices are not specific to Ethernet, but play roles in other network technologies as well.

Repeaters
The first popular Ethernet medium was a copper coaxial cable known as "thicknet." The maximum length of a thicknet cable was 500 meters. In large buildings or campus environments, a 500-meter cable could not always reach every network device. A repeater addresses this problem.

Repeaters connect multiple Ethernet segments, listening to each segment and repeating the signal heard on one segment onto every other segment connected to the repeater. By running multiple cables and joining them with repeaters, you can significantly increase your network diameter.

Segmentation
In our dinner table analogy, we had only a few people at a table carrying out the conversation, so restricting ourselves to a single speaker at any given time was not a significant barrier to communication. But what if there were many people at the table and only one were allowed to speak at any given time?

In practice, we know that the analogy breaks down in circumstances such as these. With larger groups of people, it is common for several different conversations to occur simultaneously. If only one person in a crowded room or at a banquet dinner were able to speak at any time, many people would get frustrated waiting for a chance to talk. For humans, the problem is self-correcting: Voices only carry so far, and the ear is adept at picking out a particular conversation from the surrounding noise. This makes it easy for us to have many small groups at a party converse in the same room; but network cables carry signals quickly and efficiently over long distances, so this natural segregation of conversations does not occur.

Ethernet networks faced congestion problems as they increased in size. If a large number of stations connected to the same segment and each generated a sizable amount of traffic, many stations might attempt to transmit whenever there was an opportunity. Under these circumstances, collisions would become more frequent and could begin to choke out successful transmissions, which could take inordinately long to complete. One way to reduce congestion is to split a single segment into multiple segments, thus creating multiple collision domains. This solution creates a different problem, however, as these now-separate segments are unable to share information with each other.

Bridges
To restore connectivity between separate segments, Ethernet networks implemented bridges. Bridges connect two or more network segments, increasing the network diameter as a repeater does, but bridges also help regulate traffic. They can send and receive transmissions just like any other node, but they do not function the same as a normal node. The bridge does not originate any traffic of its own; like a repeater, it only echoes what it hears from other stations. (That last statement is not entirely accurate: Bridges do create a special Ethernet frame that allows them to communicate with other bridges, but that is outside the scope of this article.)

Remember how the multiple access and shared medium of Ethernet meant that every station on the wire received every transmission, whether it was the intended recipient or not? Bridges make use of this feature to relay traffic between segments. In the figure above, the bridge connects segments 1 and 2. If station A or B were to transmit, the bridge would also receive the transmission on segment 1. How should the bridge respond to this traffic? It could automatically transmit the frame onto segment 2, like a repeater, but that would not relieve congestion, as the network would behave like one long segment.

One goal of the bridge is to reduce unnecessary traffic on both segments. It does this by examining the destination address of the frame before deciding how to handle it. If the destination address is that of station A or B, then there is no need for the frame to appear on segment 2. In this case, the bridge does nothing. We can say that the bridge filters or drops the frame. If the destination address is that of station C or D, or if it is the broadcast address, then the bridge will transmit, or forward the frame on to segment 2. By forwarding packets, the bridge allows any of the four devices in the figure to communicate. Additionally, by filtering packets when appropriate, the bridge makes it possible for station A to transmit to station B at the same time that station C transmits to station D, allowing two conversations to occur simultaneously!
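The bridge's filter-or-forward decision can be sketched as follows. A real bridge learns which segment each address lives on by watching the source addresses of passing frames; this toy version takes that table as a given. Station letters match the example in the text:

```python
BROADCAST = "broadcast"
segment_of = {"A": 1, "B": 1, "C": 2, "D": 2}   # learned address table

def handle(destination: str, arrived_on: int) -> str:
    if destination == BROADCAST:
        return "forward"           # broadcasts must reach every segment
    if segment_of.get(destination) == arrived_on:
        return "filter"            # the recipient already heard the frame
    return "forward"               # relay it onto the other segment

print(handle("B", arrived_on=1))         # filter  -- A to B stays local
print(handle("C", arrived_on=1))         # forward -- A to C must cross
print(handle(BROADCAST, arrived_on=2))   # forward
```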

Switches are the modern counterparts of bridges, functionally equivalent but offering a dedicated segment for every node on the network (more on switches later in the article).

Routers: Logical Segmentation
Bridges can reduce congestion by allowing multiple conversations to occur on different segments simultaneously, but they have their limits in segmenting traffic as well.
An important characteristic of bridges is that they forward Ethernet broadcasts to all connected segments. This behavior is necessary, as Ethernet broadcasts are destined for every node on the network, but it can pose problems for bridged networks that grow too large. When a large number of stations broadcast on a bridged network, congestion can be as bad as if all those devices were on a single segment.

Routers are advanced networking components that can divide a single network into two logically separate networks. While Ethernet broadcasts cross bridges in their search to find every node on the network, they do not cross routers, because the router forms a logical boundary for the network.

Routers operate based on protocols that are independent of the specific networking technology, like Ethernet or token ring (we'll discuss token ring later). This allows routers to easily interconnect various network technologies, both local and wide area, and has led to their widespread deployment in connecting devices around the world as part of the global Internet.
See How Routers Work for a detailed discussion of this technology.

Switched Ethernet
Modern Ethernet implementations often look nothing like their historical counterparts. Where long runs of coaxial cable provided attachments for multiple stations in legacy Ethernet, modern Ethernet networks use twisted pair wiring or fiber optics to connect stations in a radial pattern. Where legacy Ethernet networks transmitted data at 10 megabits per second (Mbps), modern networks can operate at 100 or even 1,000 Mbps!

Perhaps the most striking advancement in contemporary Ethernet networks is the use of switched Ethernet. Switched networks replace the shared medium of legacy Ethernet with a dedicated segment for each station. These segments connect to a switch, which acts much like an Ethernet bridge, but can connect many of these single station segments. Some switches today can support hundreds of dedicated segments. Since the only devices on the segments are the switch and the end station, the switch picks up every transmission before it reaches another node. The switch then forwards the frame over the appropriate segment, just like a bridge, but since any segment contains only a single node, the frame only reaches the intended recipient. This allows many conversations to occur simultaneously on a switched network. (See How LAN Switches work to learn more about switching technology.)
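A small sketch of why switching allows simultaneous conversations, assuming one station per port (the stations and port numbers are invented): a frame is relayed only onto the destination's port, so transfers between disjoint port pairs can proceed at the same time.

```python
port_of = {"A": 1, "B": 2, "C": 3, "D": 4}   # one dedicated segment each

def forward(frames):
    """Deliver (source, destination) pairs, skipping any busy ports."""
    busy, delivered = set(), []
    for source, destination in frames:
        ports = {port_of[source], port_of[destination]}
        if ports & busy:
            continue               # one of those segments is already in use
        busy |= ports
        delivered.append((source, destination))
    return delivered

# A->B and C->D touch four distinct ports, so both happen at once:
print(forward([("A", "B"), ("C", "D")]))   # [('A', 'B'), ('C', 'D')]
```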

Full-duplex Ethernet
Ethernet switching gave rise to another advancement, full-duplex Ethernet. Full-duplex is a data communications term that refers to the ability to send and receive data at the same time.
Legacy Ethernet is half-duplex, meaning information can move in only one direction at a time. In a totally switched network, nodes only communicate with the switch and never directly with each other. Switched networks also employ either twisted pair or fiber optic cabling, both of which use separate conductors for sending and receiving data. In this type of environment, Ethernet stations can forgo the collision detection process and transmit at will, since the station and the switch are the only two devices with access to the medium. This allows end stations to transmit to the switch at the same time that the switch transmits to them, achieving a collision-free environment.

Ethernet or 802.3?
You may have heard the term 802.3 used in place of or in conjunction with the term Ethernet. "Ethernet" originally referred to a networking implementation standardized by Digital, Intel and Xerox. (For this reason, it is also known as the DIX standard.)
In February 1980, the Institute of Electrical and Electronics Engineers, or IEEE (pronounced "I triple E"), created a committee to standardize network technologies. The IEEE titled this the 802 working group, named after the year and month of its formation. Subcommittees of the 802 working group separately addressed different aspects of networking. The IEEE distinguished each subcommittee by numbering it 802.X, with X representing a unique number for each subcommittee. The 802.3 group standardized the operation of a CSMA/CD network that was functionally equivalent to the DIX Ethernet.

Ethernet and 802.3 differ slightly in their terminology and the data format for their frames, but are in most respects identical. Today, the term Ethernet refers generically to both the DIX Ethernet implementation and the IEEE 802.3 standard.

Token Ring
The most common local area network alternative to Ethernet is a network technology developed by IBM, called token ring. Where Ethernet relies on the random gaps between transmissions to regulate access to the medium, token ring implements a strict, orderly access method. A token-ring network arranges nodes in a logical ring, as shown below. The nodes forward frames in one direction around the ring, removing a frame when it has circled the ring once.
1. The ring initializes by creating a token, which is a special type of frame that gives a station permission to transmit.
2. The token circles the ring like any frame until it encounters a station that wishes to transmit data.
3. This station then "captures" the token by replacing the token frame with a data-carrying frame, which then circles the ring.
4. Once that data frame returns to the transmitting station, that station removes the data frame, creates a new token and forwards that token on to the next node in the ring.
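The four steps above can be sketched as a toy simulation (the station names and queued messages are invented for illustration; the "token" is implicit in whose turn it is):

```python
stations = ["A", "B", "C", "D"]            # logical ring order
queued = {"B": ["report"], "D": ["photo", "memo"]}

def circulate(rotations: int):
    """Pass the token around the ring; each holder sends one frame per pass."""
    sent = []
    for _ in range(rotations):
        for station in stations:           # the token visits each node in turn
            frames = queued.get(station)
            if frames:                     # capture the token, transmit a frame
                sent.append((station, frames.pop(0)))
            # the data frame circles the ring and is removed by its sender,
            # which then releases a fresh token to the next node
    return sent

print(circulate(2))
# [('B', 'report'), ('D', 'photo'), ('D', 'memo')]
```

Note the deterministic order: D's second frame has to wait a full rotation, which is exactly the fairness property the text describes.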

Token-ring nodes do not look for a carrier signal or listen for collisions; the presence of the token frame provides assurance that the station can transmit a data frame without fear of another station interrupting. Because a station transmits only a single data frame before passing the token along, each station on the ring will get a turn to communicate in a deterministic and fair manner. Token-ring networks typically transmit data at either 4 or 16 Mbps.

Fiber-distributed data interface (FDDI) is another token-passing technology that operates over a pair of fiber optic rings, with the two rings passing tokens in opposite directions. FDDI networks offered transmission speeds of 100 Mbps, which initially made them quite popular for high-speed networking. With the advent of 100-Mbps Ethernet, which is cheaper and easier to administer, FDDI has waned in popularity.

Asynchronous transfer mode
A final network technology that bears mentioning is asynchronous transfer mode, or ATM. ATM networks blur the line between local and wide area networking, being able to attach many different devices with high reliability and at high speeds, even across the country. ATM networks are suitable for carrying not only data, but voice and video traffic as well, making them versatile and expandable. While ATM has not gained acceptance as rapidly as originally predicted, it is nonetheless a solid network technology for the future.

Ethernet’s popularity continues to grow. With more than 30 years of industry acceptance, the standard is well known and well understood, which makes configuration and troubleshooting easier. As other technologies have advanced, Ethernet has evolved to keep pace, increasing in speed and functionality.




Taken from
www.computer.howstuffworks.com/ethernet19.htm

Adding RAM to a Computer

Up to a point, adding RAM (random access memory) will normally cause your computer to feel faster on certain types of operations. RAM is important because of an operating system component called the virtual memory manager (VMM).
When you run a program such as a word processor or an Internet browser, the microprocessor in your computer pulls the executable file off the hard disk and loads it into RAM. In the case of a big program like Microsoft Word or Excel, the EXE consumes many megabytes. The microprocessor also pulls in a number of shared DLLs (dynamic link libraries) -- shared pieces of code used by multiple applications. The DLLs take many more megabytes. Then the microprocessor loads in the data files you want to look at, which might total several megabytes if you are looking at several documents or browsing a page with a lot of graphics. So a big application can easily take 100 megabytes of RAM or more.
On my machine, at any given time I might have the following applications running:
· A word processor
· A spreadsheet
· A DOS prompt
· An e-mail program
· A drawing program
· Three or four browser windows
· A fax program
· A Telnet session

Besides all of those applications, the operating system itself is taking up a good bit of space. Everything together may need more RAM than your machine has. Where does all the extra RAM space come from?

The extra space is created by the virtual memory manager. The VMM looks at RAM and finds sections of RAM that are not currently needed. It puts these sections of RAM in a place called the swap file on the hard disk. For example, even though I have my e-mail program open, I haven't looked at e-mail in the last 45 minutes. So the VMM moves all of the bytes making up the e-mail program's EXE, DLLs and data out to the hard disk. That is called swapping out the program. The next time I click on the e-mail program, the VMM will swap in all of its bytes from the hard disk, and probably swap something else out in the process. Because the hard disk is slow relative to RAM, the act of swapping things in and out causes a noticeable delay.

If you have a very small amount of RAM (say, 256 megabytes), then the VMM is always swapping things in and out to get anything done. In that case, your computer feels like it is crawling. As you add more RAM, you get to a point where you only notice the swapping when you load a new program or change windows. If you were to put 2 gigabytes of RAM in your computer, the VMM would have plenty of room and you would never see it swapping anything. That is as fast as things get. If you then added more RAM, it would have no effect.
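The swapping behavior described above can be modeled as a tiny least-recently-used cache. This is a simplification (real VMMs work on fixed-size pages, not whole programs), but it captures why idle programs are the ones swapped out:

```python
from collections import OrderedDict

class VMM:
    """RAM as a fixed number of slots with least-recently-used swapping."""

    def __init__(self, slots: int):
        self.slots = slots
        self.ram = OrderedDict()           # programs currently resident
        self.swaps = 0                     # slow trips to the swap file

    def touch(self, program: str):
        if program in self.ram:
            self.ram.move_to_end(program)  # used just now; keep it resident
            return
        if len(self.ram) >= self.slots:    # RAM full: swap out whichever
            self.ram.popitem(last=False)   # program has been idle longest
        self.ram[program] = True
        self.swaps += 1                    # loading from disk is the slow part

vmm = VMM(slots=3)
for program in ["word", "mail", "browser", "word", "fax", "mail"]:
    vmm.touch(program)
print(vmm.swaps)   # 5 -- "mail" was swapped out and had to be reloaded
```

With more slots (more RAM), the same sequence causes fewer swaps, which is exactly the speed-up described in the next paragraph.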

Some applications (things like Photoshop, many compilers, most film editing and animation packages) need tons of RAM to do their job. If you run them on a machine with too little RAM, they swap constantly and run very slowly. You can get a huge speed boost by adding enough RAM to eliminate the swapping. Programs like these may run 10 to 50 times faster once they have enough RAM!



Taken from
http://www.computer.howstuffworks.com/question175.htm

Power Supplies

If there is any one component that is absolutely vital to the operation of a computer, it is the power supply. Without it, a computer is just an inert box full of plastic and metal. The power supply converts the alternating current (AC) line from your home to the direct current (DC) needed by the personal computer. In this article, we'll learn how PC power supplies work and what the wattage ratings mean.

Power Supply
In a personal computer (PC), the power supply is the metal box usually found in a corner of the case. The power supply is visible from the back of many systems because it contains the power-cord receptacle and the cooling fan.
Power supplies, often referred to as "switching power supplies", use switcher technology to convert the AC input to lower DC voltages. The typical voltages supplied are:
· 3.3 volts
· 5 volts
· 12 volts

The 3.3- and 5-volt supplies are typically used by digital circuits, while the 12-volt supply runs motors in disk drives and fans. The main specification of a power supply is its wattage. A watt is the product of the voltage in volts and the current in amperes, or amps. If you have been around PCs for many years, you probably remember that the original PCs had large red toggle switches that had a good bit of heft to them. When you turned the PC on or off, you knew you were doing it. These switches actually controlled the flow of 120-volt power to the power supply.
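The watt arithmetic is worth making concrete: power in watts is volts times amps, so each rail's capacity can be stated either way. The rail ratings below are illustrative, not from any particular supply's label.

```python
def watts(volts: float, amps: float) -> float:
    """Power delivered by a rail at the given voltage and current."""
    return volts * amps

print(watts(12.0, 8.0))    # 96.0 -- a 12 V rail rated at 8 A
print(watts(5.0, 20.0))    # 100.0 -- a 5 V rail rated at 20 A
```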

Today you turn on the power with a little push button, and you turn off the machine with a menu option. These capabilities were added to standard power supplies several years ago. The operating system can send a signal to the power supply to tell it to turn off. The push button sends a 5-volt signal to the power supply to tell it when to turn on. The power supply also has a circuit that supplies 5 volts, called VSB for "standby voltage," even when the supply is officially "off," so that the button will work.

Switcher Technology
Prior to 1980 or so, power supplies tended to be heavy and bulky. They used large, heavy transformers and huge capacitors (some as large as soda cans) to convert line voltage at 120 volts and 60 hertz into 5 volts and 12 volts DC.

The switching power supplies used today are much smaller and lighter. They convert the 60-hertz (Hz, or cycles per second) line current to a much higher frequency, meaning more cycles per second. This conversion enables a small, lightweight transformer in the power supply to do the actual voltage step-down from 120 volts (or 220 in certain countries) to the voltages needed by the particular computer components. The higher-frequency AC current provided by a switcher supply is also easier to rectify and filter compared to the original 60-Hz AC line voltage, reducing the variances in voltage for the sensitive electronic components in the computer.

A switcher power supply draws only the power it needs from the AC line. The typical voltages and currents a supply provides are shown on its label.
Switcher technology is also used to make AC from DC, as found in many of the automobile power inverters used to run AC appliances from a car battery, and in uninterruptible power supplies. The switcher circuitry in an inverter chops the battery's direct current into alternating current, which the inverter's transformer then steps up to the voltage of household appliances (120 VAC).

Power Supply Standardization
Over time, there have been at least six different standard power supplies for personal computers. Recently, the industry has settled on using ATX-based power supplies. ATX is an industry specification that means the power supply has the physical characteristics to fit a standard ATX case and the electrical characteristics to work with an ATX motherboard.
PC power-supply cables use standardized, keyed connectors that make it difficult to connect the wrong ones. Also, fan manufacturers often use the same connectors as the power cables for disk drives, allowing a fan to easily obtain the 12 volts it needs. Color-coded wires and industry standard connectors make it possible for the consumer to have many choices for a replacement power supply.

Advanced Power Management
Advanced Power Management (APM) offers a set of five different states that your system can be in. It was developed by Microsoft and Intel for PC users who wish to conserve power. Each system component, including the operating system, basic input/output system (BIOS), motherboard and attached devices, needs to be APM-compliant to use this feature. Should you wish to disable APM because you suspect it is using up system resources or causing a conflict, the best way to do this is in the BIOS. That way, the operating system won't try to reinstall it, which could happen if it were disabled only in the software.

Power Supply Wattage
A 400-watt switching power supply will not necessarily use more power than a 250-watt supply. A larger supply may be needed if you use every available slot on the motherboard or every available drive bay in the personal computer case. It is not a good idea to have a 250-watt supply if you have 250 watts total in devices, since the supply should not be loaded to 100 percent of its capacity.

According to PC Power & Cooling, Inc., some power consumption values (in watts) for common items in a personal computer are:
Accelerated Graphics Port (AGP) card: 20 to 30W
Peripheral Component Interconnect (PCI) card: 5W
Small computer system interface (SCSI) PCI card: 20 to 25W
Floppy disk drive: 5W
Network interface card: 4W
50X CD-ROM drive: 10 to 25W
RAM: 10W per 128M
5400 RPM Integrated Drive Electronics (IDE) hard disk drive: 5 to 11W
7200 RPM IDE hard disk drive: 5 to 15W
Motherboard (without CPU or RAM): 20 to 30W
550 MHz Pentium III: 30W
733 MHz Pentium III: 23.5W
300 MHz Celeron: 18W
600 MHz Athlon: 45W
Power supplies of the same form factor ("form factor" refers to the physical size and shape of the unit) are typically differentiated by the wattage they supply and the length of the warranty.
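To see how these numbers add up for a real machine, here is a rough sketch in Python. The component mix is one invented example configuration, and the 30 percent headroom margin is my own assumption -- the article only says the supply should not be loaded to 100 percent of its capacity.

```python
# Hedged sketch: estimating power-supply load from the component
# wattages listed above. The 30% headroom figure is an assumption,
# not a number from the article.

# Worst-case draws (watts) taken from the table above.
COMPONENT_WATTS = {
    "AGP card": 30,
    "SCSI PCI card": 25,
    "floppy drive": 5,
    "network card": 4,
    "50X CD-ROM": 25,
    "RAM (256 MB)": 20,       # 10 W per 128 MB
    "7200 RPM IDE disk": 15,
    "motherboard": 30,
    "600 MHz Athlon": 45,
}

def required_wattage(components, headroom=0.30):
    """Total worst-case draw plus a margin so the supply never runs at 100%."""
    total = sum(components.values())
    return total, total * (1 + headroom)

total, recommended = required_wattage(COMPONENT_WATTS)
print(f"Worst-case draw: {total} W")
print(f"Recommended supply: at least {recommended:.0f} W")
```

For this example configuration the worst-case draw comes to 199 W, so even a 250-watt supply would be running uncomfortably close to its limit.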

Power Supply Problems
The PC power supply is probably the most failure-prone item in a personal computer. It heats up and cools down each time it is used, and it receives the first in-rush of AC current when the PC is switched on. A stalled cooling fan is often a predictor of power supply failure, since components overheat once the fan stops. All devices in a PC receive their DC power via the power supply.

A typical failure of a PC power supply is often noticed as a burning smell just before the computer shuts down. Another problem could be the failure of the vital cooling fan, which allows components in the power supply to overheat. Failure symptoms include random rebooting or failure in Windows for no apparent reason.
For any problems you suspect to be the fault of the power supply, use the documentation that came with your computer. If you have ever removed the case from your personal computer to add an adapter card or memory, you can change a power supply. Make sure you remove the power cord first, since voltages are present even though your computer is off.

Power Supply Improvements
Recent motherboard and chipset improvements let the user monitor the revolutions per minute (RPM) of the power-supply fan through the BIOS and a Windows application supplied by the motherboard manufacturer. Newer designs also offer fan control, so the fan runs only as fast as current cooling needs require.

Some new computers, particularly those designed for use as servers, provide redundant power supplies: two or more supplies in the system, with one providing power and the other acting as a backup. The backup supply takes over immediately if the primary fails, and the failed unit can then be swapped out while the other supply keeps the system running.



Taken from
www.computer.howstuffworks.com/power-supply.htm

Computer Viruses

Strange as it may sound, the computer virus is something of an Information Age marvel. On one hand, viruses show us how vulnerable we are -- a properly engineered virus can have a devastating effect, disrupting productivity and doing billions of dollars in damages. On the other hand, they show us how sophisticated and interconnected human beings have become.
For example, experts estimate that the Mydoom worm infected approximately a quarter-million computers in a single day in January 2004. Back in March 1999, the Melissa virus was so powerful that it forced Microsoft and a number of other very large companies to completely turn off their e-mail systems until the virus could be contained. The ILOVEYOU virus in 2000 had a similarly devastating effect. In January 2007, a worm called Storm appeared -- by October, experts believed up to 50 million computers were infected. That's pretty impressive when you consider that many viruses are incredibly simple.
When you listen to the news, you hear about many different forms of electronic infection. The most common are:
· Viruses - A virus is a small piece of software that piggybacks on real programs. For example, a virus might attach itself to a program such as a spreadsheet program. Each time the spreadsheet program runs, the virus runs, too, and it has the chance to reproduce (by attaching to other programs) or wreak havoc.

· E-mail viruses - An e-mail virus travels as an attachment to e-mail messages, and usually replicates itself by automatically mailing itself to dozens of people in the victim's e-mail address book. Some e-mail viruses don't even require a double-click -- they launch when you view the infected message in the preview pane of your e-mail software [source: Johnson].

· Trojan horses - A Trojan horse is simply a computer program. The program claims to do one thing (it may claim to be a game) but instead does damage when you run it (it may erase your hard disk). Trojan horses have no way to replicate automatically.

· Worms - A worm is a small piece of software that uses computer networks and security holes to replicate itself. A copy of the worm scans the network for another machine that has a specific security hole. It copies itself to the new machine using the security hole, and then starts replicating from there, as well.

Virus Origins
Computer viruses are called viruses because they share some of the traits of biological viruses. A computer virus passes from computer to computer like a biological virus passes from person to person. Unlike a cell, a virus has no way to reproduce by itself. Instead, a biological virus must inject its DNA into a cell. The viral DNA then uses the cell's existing machinery to reproduce itself. In some cases, the cell fills with new viral particles until it bursts, releasing the virus. In other cases, the new virus particles bud off the cell one at a time, and the cell remains alive.

A computer virus shares some of these traits. A computer virus must piggyback on top of some other program or document in order to launch. Once it is running, it can infect other programs or documents. Obviously, the analogy between computer and biological viruses stretches things a bit, but there are enough similarities that the name sticks.

People write computer viruses. A person has to write the code, test it to make sure it spreads properly and then release it. A person also designs the virus's attack phase, whether it's a silly message or the destruction of a hard disk. Why do they do it?
There are at least three reasons. The first is the same psychology that drives vandals and arsonists. Why would someone want to break a window on someone's car, paint signs on buildings or burn down a beautiful forest? For some people, that seems to be a thrill. If that sort of person knows computer programming, then he or she may funnel energy into the creation of destructive viruses.

The second reason has to do with the thrill of watching things blow up. Some people have a fascination with things like explosions and car wrecks. When you were growing up, there might have been a kid in your neighborhood who learned how to make gunpowder. And that kid probably built bigger and bigger bombs until he either got bored or did some serious damage to himself. Creating a virus is a little like that -- it creates a bomb inside a computer, and the more computers that get infected the more "fun" the explosion.

The third reason involves bragging rights, or the thrill of doing it. Sort of like Mount Everest -- the mountain is there, so someone is compelled to climb it. If you are a certain type of programmer who sees a security hole that could be exploited, you might simply be compelled to exploit the hole yourself before someone else beats you to it.

Of course, most virus creators seem to miss the point that they cause real damage to real people with their creations. Destroying everything on a person's hard disk is real damage. Forcing a large company to waste thousands of hours cleaning up after a virus is real damage. Even a silly message is real damage because someone has to waste time getting rid of it. For this reason, the legal system is getting much harsher in punishing the people who create viruses.

Patch Tuesday
On the second Tuesday of every month, Microsoft releases a list of known vulnerabilities in the Windows operating system. The company issues patches for those security holes at the same time, which is why the day is known as "Patch Tuesday." Attacks that exploit a hole before any patch for it exists are known as "zero-day" attacks. Thankfully, the major anti-virus vendors work with Microsoft to identify holes ahead of time, so if you keep your software up to date and patch your system promptly, you shouldn't have to worry much about zero-day problems.

Virus History
Traditional computer viruses were first widely seen in the late 1980s, and they came about because of several factors. The first factor was the spread of personal computers (PCs). Prior to the 1980s, home computers were nearly non-existent or they were toys. Real computers were rare, and they were locked away for use by "experts." During the 1980s, real computers started to spread to businesses and homes because of the popularity of the IBM PC (released in 1981) and the Apple Macintosh (released in 1984). By the late 1980s, PCs were widespread in businesses, homes and college campuses.

The second factor was the use of computer bulletin boards. People could dial up a bulletin board with a modem and download programs of all types. Games were extremely popular, and so were simple word processors, spreadsheets and other productivity software. Bulletin boards led to the precursor of the virus known as the Trojan horse. A Trojan horse is a program with a cool-sounding name and description. So you download it. When you run the program, however, it does something uncool like erasing your disk. You think you are getting a neat game, but it wipes out your system. Trojan horses only hit a small number of people because they are quickly discovered, the infected programs are removed and word of the danger spreads among users.
The third factor that led to the creation of viruses was the floppy disk. In the 1980s, programs were small, and you could fit the entire operating system, a few programs and some documents onto a floppy disk or two. Many computers did not have hard disks, so when you turned on your machine it would load the operating system and everything else from the floppy disk. Virus authors took advantage of this to create the first self-replicating programs.

Early viruses were pieces of code attached to a common program like a popular game or a popular word processor. A person might download an infected game from a bulletin board and run it. A virus like this is a small piece of code embedded in a larger, legitimate program. When the user runs the legitimate program, the virus loads itself into memory and looks around to see if it can find any other programs on the disk. If it can find one, it modifies that program to add the virus's code to it. Then the virus launches the "real program." The user really has no way to know that the virus ever ran. Unfortunately, the virus has now reproduced itself, so two programs are infected. The next time the user launches either of those programs, they infect other programs, and the cycle continues.

If one of the infected programs is given to another person on a floppy disk, or if it is uploaded to a bulletin board, then other programs get infected. This is how the virus spreads.
The spreading part is the infection phase of the virus. Viruses wouldn't be so violently despised if all they did was replicate themselves. Most viruses also have a destructive attack phase where they do damage. Some sort of trigger will activate the attack phase, and the virus will then do something -- anything from printing a silly message on the screen to erasing all of your data. The trigger might be a specific date, the number of times the virus has been replicated or something similar.

Virus Evolution
As virus creators became more sophisticated, they learned new tricks. One important trick was the ability to load viruses into memory so they could keep running in the background as long as the computer remained on. This gave viruses a much more effective way to replicate themselves. Another trick was the ability to infect the boot sector on floppy disks and hard disks. The boot sector is a small program that is the first part of the operating system that the computer loads. It contains a tiny program that tells the computer how to load the rest of the operating system. By putting its code in the boot sector, a virus can guarantee it is executed. It can load itself into memory immediately and run whenever the computer is on. Boot sector viruses can infect the boot sector of any floppy disk inserted in the machine, and on college campuses, where lots of people share machines, they could spread like wildfire.

In general, neither executable nor boot sector viruses are very threatening any longer. The first reason for the decline has been the huge size of today's programs. Nearly every program you buy today comes on a compact disc. Compact discs (CDs) cannot be modified, and that makes viral infection of a CD unlikely, unless the manufacturer permits a virus to be burned onto the CD during production. The programs are so big that the only easy way to move them around is to buy the CD. People certainly can't carry applications around on floppy disks like they did in the 1980s, when floppies full of programs were traded like baseball cards. Boot sector viruses have also declined because operating systems now protect the boot sector.

Infection from boot sector viruses and executable viruses is still possible. Even so, it is a lot harder, and these viruses don't spread nearly as quickly as they once did. Call it "shrinking habitat," if you want to use a biological analogy. The environment of floppy disks, small programs and weak operating systems made these viruses possible in the 1980s, but that environmental niche has been largely eliminated by huge executables, unchangeable CDs and better operating system safeguards.
E-mail viruses are probably the most familiar to you. We'll look at some in the next section.

E-mail Viruses
Virus authors adapted to the changing computing environment by creating the e-mail virus. For example, the Melissa virus in March 1999 was spectacular. Melissa spread in Microsoft Word documents sent via e-mail, and it worked like this:
Someone created the virus as a Word document and uploaded it to an Internet newsgroup. Anyone who downloaded the document and opened it would trigger the virus. The virus would then send the document (and therefore itself) in an e-mail message to the first 50 people in the person's address book. The e-mail message contained a friendly note that included the person's name, so the recipient would open the document, thinking it was harmless. The virus would then create 50 new messages from the recipient's machine. At that rate, the Melissa virus quickly became the fastest-spreading virus anyone had seen at the time. As mentioned earlier, it forced a number of large companies to shut down their e-mail systems.
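The 50-recipients-per-victim mechanism above implies exponential growth, which a tiny model can illustrate. The assumptions here (every recipient opens the document, and no two address books overlap) are deliberate simplifications -- both break down in reality, which is part of why real-world spread eventually slows.

```python
# Back-of-envelope model of Melissa-style spread: each newly infected
# machine mails the document to 50 more people. Assumes every recipient
# opens it and there is no address-book overlap, so these numbers are
# an upper bound, not a measurement.

def melissa_generations(fanout=50, generations=4):
    infected = 1          # the original poster's machine
    newly = 1
    for g in range(1, generations + 1):
        newly *= fanout   # each new victim infects `fanout` more
        infected += newly
        print(f"generation {g}: {infected:,} machines infected")
    return infected

melissa_generations()
```

After just four generations the idealized count passes six million machines, which makes it easy to see why a virus like this could overwhelm corporate mail servers within hours.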

The ILOVEYOU virus, which appeared on May 4, 2000, was even simpler. It contained a piece of code as an attachment. People who double-clicked on the attachment launched the code. It then sent copies of itself to everyone in the victim's address book and started corrupting files on the victim's machine. This is as simple as a virus can get. It is really more of a Trojan horse distributed by e-mail than it is a virus.

The Melissa virus took advantage of the programming language built into Microsoft Word called VBA, or Visual Basic for Applications. It is a complete programming language and it can be programmed to do things like modify files and send e-mail messages. It also has a useful but dangerous auto-execute feature. A programmer can insert a program into a document that runs instantly whenever the document is opened. This is how the Melissa virus was programmed. Anyone who opened a document infected with Melissa would immediately activate the virus. It would send the 50 e-mails, and then infect a central file called NORMAL.DOT so that any file saved later would also contain the virus. It created a huge mess.

Microsoft applications have a feature called Macro Virus Protection built into them to prevent this sort of virus. With Macro Virus Protection turned on (the default option is ON), the auto-execute feature is disabled. So when a document tries to auto-execute viral code, a dialog pops up warning the user. Unfortunately, many people don't know what macros or macro viruses are, and when they see the dialog they ignore it, so the virus runs anyway. Many other people turn off the protection mechanism. So the Melissa virus spread despite the safeguards in place to prevent it.

In the case of the ILOVEYOU virus, the whole thing was human-powered. If a person double-clicked on the program that came as an attachment, then the program ran and did its thing. What fueled this virus was the human willingness to double-click on the executable.

Phishing and Social Engineering
While you may be taking steps to protect your computer from becoming infected by a virus, you may very well run into another, more insidious type of attack. Phishing and other social engineering attacks have been on the rise. Social engineering is a fancy term for someone trying to get you to give up your personal information -- online or in person -- so they can use it to steal from you. Anti-spam traps may catch e-mail messages coming from phishers, but the U.S. Computer Emergency Readiness Team says the best way for you to beat them at their own game is to be wary. And never give out your personal or financial information online.
Now that we've covered e-mail viruses, let's take a look at worms.

Worms
A worm is a computer program that has the ability to copy itself from machine to machine. Worms use up computer time and network bandwidth when they replicate, and often carry payloads that do considerable damage. A worm called Code Red made huge headlines in 2001. Experts predicted that this worm could clog the Internet so effectively that things would completely grind to a halt.

A worm usually exploits some sort of security hole in a piece of software or the operating system. For example, the Slammer worm (which caused mayhem in January 2003) exploited a hole in Microsoft's SQL server. "Wired" magazine took a fascinating look inside Slammer's tiny (376 byte) program.

Worms normally move around and infect other machines through computer networks. Using a network, a worm can expand from a single copy incredibly quickly. The Code Red worm replicated itself more than 250,000 times in approximately nine hours on July 19, 2001 [Source: Rhodes].

The Code Red worm slowed down Internet traffic when it began to replicate itself, but not nearly as badly as predicted. Each copy of the worm scanned the Internet for Windows NT or Windows 2000 servers that did not have the Microsoft security patch installed. Each time it found an unsecured server, the worm copied itself to that server. The new copy then scanned for other servers to infect. Depending on the number of unsecured servers, a worm could conceivably create hundreds of thousands of copies.
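The scan-and-copy behavior described above can be sketched as a toy simulation. All the numbers here (address-space size, fraction of unpatched servers, probe rate) are invented for illustration and are not taken from the real worm.

```python
import random

# Toy simulation of Code Red-style spread: each infected server
# repeatedly probes random addresses, and a probe that lands on an
# unpatched server infects it. Parameters are illustrative only.

def simulate_worm(hosts=10_000, unpatched_fraction=0.1,
                  probes_per_round=20, rounds=8, seed=42):
    rng = random.Random(seed)
    vulnerable = set(rng.sample(range(hosts), int(hosts * unpatched_fraction)))
    infected = {next(iter(vulnerable))}          # patient zero
    for r in range(1, rounds + 1):
        new = set()
        for _ in range(len(infected) * probes_per_round):
            target = rng.randrange(hosts)
            if target in vulnerable and target not in infected:
                new.add(target)
        infected |= new
        print(f"round {r}: {len(infected)} infected")
    return len(infected)

simulate_worm()
```

Even in this crude model, growth is slow while only one copy is scanning and then accelerates sharply, mirroring the pattern observed on July 19, 2001.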

The Code Red worm had instructions to do three things:
· Replicate itself for the first 20 days of each month
· Replace Web pages on infected servers with a page featuring the message "Hacked by Chinese"
· Launch a concerted attack on the White House Web site in an attempt to overwhelm it [Source: eEye Digital Security]
Upon successful infection, Code Red would wait for the appointed hour and connect to the www.whitehouse.gov domain. This attack would consist of the infected systems simultaneously sending 100 connections to port 80 of www.whitehouse.gov (198.137.240.91).

The U.S. government changed the IP address of www.whitehouse.gov to circumvent that particular threat from the worm and issued a general warning about the worm, advising users of Windows NT or Windows 2000 Web servers to make sure they installed the security patch.

A worm called Storm, which showed up in 2007, immediately started making a name for itself. Storm uses social engineering techniques to trick users into loading the worm on their computers. So far, it's working -- experts believe between one million and 50 million computers have been infected [source: Schneier].
When the worm is launched, it opens a back door into the computer, adds the infected machine to a botnet and installs code that hides itself. The botnets are small peer-to-peer groups rather than a larger, more easily identified network. Experts think the people controlling Storm rent out their micro-botnets to deliver spam or adware, or for denial-of-service attacks on Web sites.

How to Protect Your Computer from Viruses
You can protect yourself against viruses with a few simple steps:
· If you are truly worried about traditional (as opposed to e-mail) viruses, you should be running a more secure operating system like UNIX. You never hear about viruses on these operating systems because the security features keep viruses (and unwanted human visitors) away from your hard disk.
· If you are using an unsecured operating system, then buying virus protection software is a nice safeguard.
· If you simply avoid programs from unknown sources (like the Internet), and instead stick with commercial software purchased on CDs, you eliminate almost all of the risk from traditional viruses.
· You should make sure that Macro Virus Protection is enabled in all Microsoft applications, and you should NEVER run macros in a document unless you know what they do. There is seldom a good reason to add macros to a document, so avoiding all macros is a great policy.
· You should never double-click on an e-mail attachment that contains an executable. Attachments that come in as Word files (.DOC), spreadsheets (.XLS), images (.GIF), etc., are data files and they can do no damage (noting the macro virus problem in Word and Excel documents mentioned above). However, some viruses can now come in through .JPG graphic file attachments. A file with an extension like EXE, COM or VBS is an executable, and an executable can do any sort of damage it wants. Once you run it, you have given it permission to do anything on your machine. The only defense is never to run executables that arrive via e-mail.
Open the Options dialog from the Tools menu in Microsoft Word and make sure that Macro Virus Protection is enabled. Newer versions of Word allow you to customize the level of macro protection you use.
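The attachment rule above - treat anything whose final extension marks it as executable as dangerous - can be sketched like this. The extension list is illustrative rather than exhaustive, and note that viruses such as ILOVEYOU hid behind double extensions like "LOVE-LETTER-FOR-YOU.TXT.vbs", which is why only the final extension matters.

```python
from pathlib import Path

# Minimal sketch of the attachment-screening rule described above:
# flag any file whose FINAL extension is executable. The extension
# list here is illustrative, not exhaustive.

EXECUTABLE_EXTENSIONS = {".exe", ".com", ".vbs", ".bat", ".scr", ".pif"}

def is_risky_attachment(filename: str) -> bool:
    """Return True if the attachment's final extension is executable."""
    return Path(filename).suffix.lower() in EXECUTABLE_EXTENSIONS

for f in ["report.doc", "photo.jpg", "game.exe", "LOVE-LETTER-FOR-YOU.TXT.vbs"]:
    print(f, "->", "BLOCK" if is_risky_attachment(f) else "allow")
```

Checking only the last extension is what defeats the double-extension trick: a mail client that shows "LOVE-LETTER-FOR-YOU.TXT" hides the ".vbs", but the filter above still sees it.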

By following these simple steps, you can remain virus-free.
For more information on computer viruses and related topics, see the links on the next page.
An Anti-Virus Virus?
As we've discussed, worms attack known vulnerabilities in computer operating systems. Someone came up with the idea of turning worm tech around and created a variation of the MSBlast worm that would automatically patch the hole in the operating system and send itself out to other computers to do the same. Sounds like a good idea, right? Not so fast. MSBlast.D, Nachi or Welchia, as it was known, turned out to be more trouble than good. As it multiplied and scanned corporate networks for the vulnerability, it clogged network traffic [source: Lemos].



Taken from
www.computer.howstuffworks.com/virus.htm

Remote Computer Access

There's a whole class of remote computer access programs. The major players on the market are Symantec's pcAnywhere, Netopia's Timbuktu and LapLink's LapLink Gold. A newer player taking a unique, Web-based approach to remote computer access is Expertcity's GoToMyPC.

Unlike other remote access software, which requires special programs to be installed on both machines, GoToMyPC lets you remotely view and control your home or office PC from any Web browser on another Windows, Mac, Linux, Unix or Solaris computer, without having to install anything on the connecting end.

This feature of GoToMyPC frees travelers from having to lug a laptop everywhere. You only have to prepare your host computer in advance, and you can access and use it from hotels, airports, satellite offices, Internet cafes - anywhere with Web access.

To gain that kind of freedom with other remote computer access software, you'd have to carry the CD with your favorite connection program and install it on each computer you use, which is problematic at Internet terminals.

GoToMyPC's other advantage is simplicity - it takes only 2 minutes to install on the host PC. There's nothing to configure and almost no preferences to change. When you want to remotely control the host PC, you just log on to your GoToMyPC account from a Web browser, select your host, and type in the password.

The screen of the host PC appears as if you were sitting in front of it. You can run programs; access your e-mail, documents and network resources; transfer files back and forth; print from your remote computer to a local printer; and invite others to share your PC to collaborate or do demos.

Since its approach is completely different from that of other remote computer access software, GoToMyPC is independent from network protocols and settings, dynamic IP addresses, name servers, and can work fine through many corporate firewalls.

The main disadvantage is that GoToMyPC depends on Web connectivity. Unless the host has an always-on connection to the Internet, such as a LAN, DSL, or cable, GoToMyPC simply won't work. If your host isn't always online, look for another remote access software.

The service runs only as fast as your Net connection. It can function via a dial-up modem connection on the client side - the typing and viewing lag is noticeable, but transferring files is sufficiently fast.

How it works
1) Register at GoToMyPC, download a small program on the host PC, install it and assign a password to that machine. Setup takes only 2 minutes. The program works in the background and there's nothing to configure at all.

2) When you want to remotely control the host PC, you just login to your GoToMyPC's account from any computer with a Java-enabled browser, specify the PC you want to control, type in the password and begin working as if you were sitting in front of your host computer.

Security
GoToMyPC uses AES 128-bit encryption to protect data stream, file transfers, chat, keyboard and mouse input. Two passwords are required - one to log onto the service and another to gain access to the target PC. The second password resides on the host computer and is never transmitted or stored on GoToMyPC servers. If three straight login attempts fail, an account is deactivated for 5 minutes.
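The three-strikes lockout described above might be implemented along the following lines. This is a sketch of the general policy, not GoToMyPC's actual code; the class name, in-memory bookkeeping and injectable clock are all my own assumptions.

```python
import time

# Sketch of a lockout policy like the one described above: three
# consecutive failed logins deactivate the account for 5 minutes.
# Storage and clock handling here are illustrative assumptions.

LOCKOUT_AFTER = 3
LOCKOUT_SECONDS = 5 * 60

class LoginGuard:
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.failures = {}   # account -> (failure count, time of last failure)

    def attempt(self, account, password_ok):
        count, last = self.failures.get(account, (0, 0.0))
        if count >= LOCKOUT_AFTER:
            if self.clock() - last < LOCKOUT_SECONDS:
                return "locked out"
            count = 0        # lockout expired; start counting fresh
        if password_ok:
            self.failures.pop(account, None)
            return "ok"
        self.failures[account] = (count + 1, self.clock())
        return "failed"
```

Using an injectable clock rather than calling `time.monotonic()` directly makes the lockout window easy to test without actually waiting five minutes.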

You have the option to blank the screen and disable the keyboard and mouse of the host PC as soon as a remote user connects, so that no one near the host can see what you're doing.

Unlike other remote access software, a host running GoToMyPC is essentially invisible to hackers. It doesn't listen for incoming connections; instead, it polls the GoToMyPC servers every 5 seconds to see whether an access request has come in. This approach is completely different from that of other remote computer access programs.
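The outbound-polling model works roughly like the sketch below. The function `check_relay_server` is a hypothetical stand-in for the real GoToMyPC protocol, which is not public; only the overall ask-every-five-seconds structure is taken from the description above.

```python
import time

# Sketch of the outbound-polling model: instead of listening on an
# open port (which a firewall would block and a scanner could find),
# the host asks the relay server every 5 seconds whether a connection
# request is waiting. `check_relay_server` is a placeholder.

POLL_INTERVAL = 5  # seconds

def check_relay_server():
    """Placeholder: return a pending access request, or None."""
    return None

def poll_loop(max_polls=None, interval=POLL_INTERVAL, check=check_relay_server):
    polls = 0
    while max_polls is None or polls < max_polls:
        request = check()
        if request is not None:
            return request      # hand off to the session handler
        polls += 1
        time.sleep(interval)
    return None
```

Because every packet originates from the host as an ordinary outbound Web request, this design passes through most corporate firewalls without any configuration, which is the property the article highlights.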

The other side of the connection is also secure. When the session is over, no trace of your work remains on the machine you used to connect to your host - a great benefit when you're using a public computer.



Taken from
www.buildwebsite4u.com

PC Troubleshooting

Here you will learn computer troubleshooting tips: how to fix hardware errors, how to handle networking issues and how to fix software errors. Troubleshooting computer problems is a vital part of the work of system administrators, hardware technicians and system specialists. Every hardware component in a computer system has its own configuration methods and troubleshooting techniques. Whether you use a computer at home or in the office, this guide will help you diagnose and troubleshoot basic computer problems.

There are some basic techniques you should be aware of. If you encounter a slow boot-up, for example, the following tips can help you increase the speed of your computer.

How to Speed up the Computer
1. Windows Defragmenter utility: Run this utility via Start > Programs > Accessories > System Tools > Disk Defragmenter. This built-in Windows utility analyzes the hard disk and consolidates scattered file fragments, which speeds up disk access.

2. Shut down unnecessary programs: Via Start > Run > Msconfig > Startup, you can prevent unwanted programs from launching when Windows starts. This shortens boot time and frees memory, so the computer runs faster.

3. Increase RAM: Adding RAM to your system can noticeably improve its speed, especially if the system currently relies heavily on the paging file.

4. Disk Cleanup: Using the Disk Cleanup utility (Start > Programs > Accessories > System Tools > Disk Cleanup), you can delete unwanted programs and files from your computer and free disk space.

5. Empty the Recycle Bin: When you delete a file or folder, it first goes to the Recycle Bin, where it still occupies space on your C drive. Empty the Recycle Bin regularly to reclaim that space.

6. Delete temporary files: Delete temporary files and cookies to improve speed. In Internet Explorer, go to Tools > Internet Options > General > Settings > View Files, where you can delete all the temporary Internet files.
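The spirit of steps 4-6 - clearing out files nobody has touched in a while - can be automated. This sketch is generic rather than Windows-specific; the seven-day threshold is an arbitrary assumption, and the `dry_run` default is deliberate, since pointing the function at the wrong directory would delete real data.

```python
import os
import time

# Hedged sketch of the cleanup idea: remove files in a scratch
# directory that haven't been modified for `max_age_days`. With
# dry_run=True (the default) it only reports what it would delete.

def clean_old_files(directory, max_age_days=7, dry_run=True):
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            removed.append(path)
            if not dry_run:
                os.remove(path)
    return removed
```

Running it once with the default `dry_run=True` and reviewing the list before running it again with `dry_run=False` is the safe workflow.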

How to Troubleshoot the Computer?
Here you will learn the basic troubleshooting methods of your computer.
1. Trial and error: When you suspect a component is faulty, test it in another computer to confirm whether the fault lies in the component itself or elsewhere in the system.

2. Check cables: In case of a device failure, check all the cables in your computer - data cables, power cables, internal circuitry cables and so on - and make sure they are all plugged in firmly and working.

3. Hardware settings: Check the hardware settings in the CMOS setup and in the system's Device Manager; make sure all device drivers are up to date and all cards are seated properly.

4. Notice changes: When you notice a software or hardware error in your computer, determine what was changed before the problem occurred.

5. Event viewer: Use the event viewer utility by going to Start > Control panel > Administrative tools > Event viewer. In the event viewer you will find the error or warning messages associated with any faulty hardware or software.

6. Make notes: Troubleshooting is a great learning opportunity; we learn a lot each time we face a problem. Make notes of error messages and their solutions, so that you have a record of how a given problem occurred and how you solved it.

Data Recovery Tips
Accidental loss or deletion of critical data can cause big problems for you and for your company. If you are a system administrator or hardware technician responsible for your company's data, it is your duty to equip yourself with good system-restore and data-recovery utilities. If you are caught empty-handed when such a problem occurs, the complete loss of data can mean serious financial damage to your company, as well as a great deal of wasted time.
Following are a few tips for recovering lost data.
1. Use good data recovery utilities such as File Recovery, Recover My Files, R-TT and the free utility Handy Recovery.

2. If you are responsible for data and system administration, use backup tape drives and regularly take backups of your server's data.

3. Use a UPS and diesel generators if power failures occur regularly in your area, because a sudden shutdown can crash your server and other systems.

4. Maintain a clean, dust-free, humidity-controlled environment in your server room.
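Tip 2's regular backups can be automated with a short script. This is a minimal sketch using only Python's standard library; a real backup system (tape drives, rotation, verification) is far more involved:

```python
# Sketch: copy a data directory to a dated backup folder.
import shutil
from datetime import date

def backup_directory(source_dir, backup_root):
    """Copy source_dir to backup_root/backup-YYYY-MM-DD and return that path."""
    target = f"{backup_root}/backup-{date.today().isoformat()}"
    shutil.copytree(source_dir, target)  # fails if today's backup already exists
    return target
```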

Diagnostics with Beep Codes
Following are the beep messages associated with the IBM BIOS.
1 short beep indicates a normal POST.
2 short beeps indicate a POST error; details can be found on screen.
Continuous beeps indicate power supply or expansion card errors.
One long beep followed by one short beep indicates a system board problem.
3 long beeps indicate a keyboard error.
No beep at all indicates a power supply error.
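The beep patterns above are simple enough to capture in a lookup table. A Python sketch (the pattern strings are just an illustrative way of naming each case):

```python
# Sketch: IBM BIOS beep codes as a lookup table.
BEEP_CODES = {
    "1 short": "normal POST",
    "2 short": "POST error; details on screen",
    "continuous": "power supply or expansion card error",
    "1 long, 1 short": "system board problem",
    "3 long": "keyboard error",
    "no beep": "power supply error",
}

def diagnose(beeps):
    """Translate a beep pattern into its likely meaning."""
    return BEEP_CODES.get(beeps, "unknown beep pattern")
```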



www.networktutorials.info

Networking Tutorials

Find computer network tutorials, a wireless communication guide, LAN/WAN guides, local area network tools, a WAN introduction, the OSI layers model and many other advanced topics in data communication. This is a very informative site for IT people, especially those in the field of computer networking. You will also find a data communication overview, tech guides, data communication related information, topologies, a tech study guide, router labs, IT certifications, an Ethernet guide, free IT resources, IP addressing tools, a telecommunication guide and many other informative resources. Data communication is the process of sharing data and shared resources between two or more connected computers. The shared resources can include printers, fax modems, hard disks, CD/DVD-ROM drives, databases and data files.

A computer network can be divided by scale: a Local Area Network (LAN) connects computers within a building or office; a medium-sized Metropolitan Area Network (MAN) connects, for example, two offices in a city; and a Wide Area Network (WAN) connects computers that may be thousands of miles apart, in another city or another country.

WAN connectivity is achieved by a device known as a router. The internet is the world's largest WAN, where millions of computers from all over the globe are connected with each other.
Networking is the practice of linking two or more computers or devices with each other. The connectivity can be wired or wireless. A computer network can be categorized in different ways, depending on the geographical area, as mentioned above.

There are two main types of computer networking: client-server and peer-to-peer. In client-server computing, one computer plays a major role, known as the server, where files, data in the form of web pages, documents or spreadsheet files, video, databases and other resources are placed.
All the other computers in a client/server network are called clients, and they get data from the server. In a peer-to-peer network, all the computers play the same role and no computer acts as a centralized server. Most major businesses around the world use the client-server model.

A network topology defines the structure, design or layout of a network. There are different topologies, such as bus, ring, star, mesh and hybrid. The star topology is the most commonly used. In the star topology, all the computers in the network are connected to a centralized device such as a hub or switch, forming a star-like structure. If the hub or switch fails for any reason, all connectivity and communication between the computers is halted.

The common communication language used by computers and communication devices is known as a protocol. The most commonly used and popular protocol suite on the internet and in home and other networks is TCP/IP. TCP/IP is not a single protocol but a suite of several protocols. A computer network can be wired or wireless, and the TCP/IP protocols work in both types of network. Data flow, or communication, can be divided into seven logical layers, known as the OSI reference model, which was standardized by the ISO. The layers, from top to bottom, are:
7. Application layer
6. Presentation layer
5. Session layer
4. Transport layer
3. Network layer
2. Data Link layer
a. Media access control (MAC) sub-layer
b. Logical link control (LLC) sub-layer
1. Physical layer
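For reference, the seven layers can also be written as a small table keyed by the conventional layer number (Physical is layer 1, Application is layer 7):

```python
# Sketch: the OSI reference model as a dictionary keyed by layer number.
OSI_LAYERS = {
    7: "Application",
    6: "Presentation",
    5: "Session",
    4: "Transport",
    3: "Network",
    2: "Data Link",  # split into MAC and LLC sub-layers
    1: "Physical",
}
```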
A network can be built at different scales and ranges, depending on the requirements of the organization and its geographical spread. Computer networks can be divided into Local Area Networks, Personal Area Networks, Campus Area Networks, Wireless Local Area Networks, Metropolitan Area Networks and Wide Area Networks. There are several connection methods, such as HomePNA, power line communication, Ethernet and Wi-Fi. A network can also be categorized into several different types based on the services it provides, such as server farms, storage area networks, value control networks, value-added networks, SOHO networks, wireless networks and jungle networks. I have explained basic and advanced data communication technologies, router commands, communication devices, certifications, IP addressing basics, subnetting, networking tips, networking interview questions, Windows networking, mobile technology and wireless computing. I have tried to cover all the hot topics in the area of computer networking so that as many users as possible can benefit from this website.
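The IP addressing and subnetting topics mentioned above can be explored directly with Python's standard ipaddress module. A small sketch, using an illustrative private network:

```python
# Sketch: split a /24 network into /26 subnets with the standard library.
import ipaddress

network = ipaddress.ip_network("192.168.1.0/24")
subnets = list(network.subnets(new_prefix=26))   # four /26 subnets
hosts_per_subnet = subnets[0].num_addresses - 2  # minus network and broadcast
```

Splitting a /24 this way yields four subnets with 62 usable host addresses each.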

I will warmly welcome any suggestions, tips and unique content that my visitors want to share with me. Before working on this website, I personally visited many networking-related websites to update my knowledge, so that I could provide the most up-to-date, unique information and the best experience to my visitors.

I have tried to add all the good features, how-tos and networking-related utilities to my website. I am hopeful that my visitors will like this website and find it very useful for basic and advanced data communication topics.



Build a Computer

Have you ever thought about building your own computer? Actually buying a motherboard and a case along with all the supporting components and assembling the whole thing yourself?
Here are three reasons why you might want to consider taking the plunge:
1. You will be able to create a custom machine that exactly matches your needs.
2. It will be much easier to upgrade your machine in the future because you will understand it completely.
3. You may be able to save some money.
And, if you have never done it before, you will definitely learn a lot about computers.
In this article, we'll take you through the entire process of building a computer. You'll learn how to choose the parts you will use, how to buy them and how to put them all together. When you're done, you will have exactly the machine that you need. Let's get started.

Decisions
Where do we start? Actually putting the machine together is pretty easy, but picking the parts and buying them takes research.
The first step in building a computer is deciding what type of machine you want to build. Do you want a really inexpensive computer for the kids to use? A small, quiet machine to use as a media computer in the living room? A high-end gaming computer? Or maybe you need a powerful machine with a lot of disk space for video editing. The possibilities are endless, and the type of machine you want to build will control many of the decisions you make down the line. Therefore, it is important to know exactly what you want the machine to accomplish from the start.

Let's imagine that you want to build a powerful video editing computer. You want it to have a dual-core CPU, lots of RAM and a terabyte of disk space. You also want to have FireWire connectors on the motherboard. These requirements are going to cause you to look for a motherboard that supports:
· Dual-core CPUs (either Intel or AMD)
· At least 4GB of high-speed RAM
· Four (or more) SATA hard drives
· FireWire connections (possibly in both the front and back of the case)
Then it all needs to go in a case with enough space to hold multiple hard disks and enough air flow to keep everything cool.
With any computer you build, knowing the type of machine you want to create can really help with decision-making.
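The requirements list above can even be written down as a small checklist and checked against candidate motherboard specs. A Python sketch; every field name here is illustrative, not taken from any real parts database:

```python
# Sketch: the video-editing build requirements as a programmatic checklist.
REQUIREMENTS = {
    "dual_core_cpu": True,  # Intel or AMD
    "ram_gb": 4,            # at least 4 GB of high-speed RAM
    "sata_ports": 4,        # four or more SATA drives
    "firewire": True,       # FireWire connectors on the board
}

def board_meets_requirements(board):
    """Return whether a candidate motherboard spec satisfies every requirement."""
    return bool(board.get("dual_core_cpu", False)
                and board.get("ram_gb", 0) >= REQUIREMENTS["ram_gb"]
                and board.get("sata_ports", 0) >= REQUIREMENTS["sata_ports"]
                and board.get("firewire", False))
```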

The Motherboard
Choosing a motherboard is the most interesting part of any building project, because there are hundreds of motherboards to choose from, each with its own advantages and disadvantages.

One easy way to think about motherboards is to break them up into a few categories. For example:
· Cheap motherboards: Generally in the $50 range, these are motherboards for older CPUs. They are great for building inexpensive machines.
· Middle-of-the-road motherboards: Ranging in price from $50 to $100, these are one step up from the cheap motherboards. In many cases you can find motherboard and CPU combos in this price range, which is another great way to build a cheap machine or an inexpensive home/office computer.

· High-end motherboards: If you are building a powerful gaming machine or video workstation, these motherboards give you the speed you need. They range in price from $100 to $200. They handle the latest CPU chips at their highest speeds.
· Extreme motherboards: Falling into the over-$200 range, these motherboards have special features that boost the price. For example, they might have multiple CPU sockets, extra memory slots or special cooling features.
You need to decide whether you are building a "cheap machine," a "high-end machine" or a "tricked-out super machine" and then choose your motherboard accordingly. Here are some other decisions that help narrow down your motherboard choices:
· Do you want to use an Intel or an AMD processor? Making this choice will cut the number of motherboards in half. AMD chips are often cheaper, but lots of people are die-hard Intel fans.
· What size motherboard do you want to use? If you are trying to build a smaller computer, you may want to look at micro ATX cases. That means you will need to buy a micro ATX motherboard. Otherwise you can use a normal ATX motherboard and case. (There are also smaller motherboard form factors like mini-ITX and even nano-ITX if you want to go really small.)
· How many USB ports do you want? If you want several, make sure the motherboard can handle it.
· Do you need FireWire? It's nice if the motherboard handles it (although it is also possible to add a card).
· Do you want an AGP or PCI Express graphics card? Or do you want to use a graphics card on the motherboard to keep the price and size down? If you want to go the cheapest route, make sure the motherboard includes a video card on-board (easiest way to tell is to see if there is a DVI or VGA connector on the motherboard). PCI Express is the latest/greatest thing, but if you want to re-use an AGP card you already own, that might be a reason to go with AGP.
· Do you want to use PATA (aka IDE) or SATA hard disks? SATA is the latest thing, and the cables are much smaller.
· What pin configuration are you using for the CPU? If you want to use the latest CPUs, make sure that your motherboard will accept them.
· Do you want to try things like dual video cards or special high-speed RAM configurations? If so, make sure the motherboard supports it.

If you don't care about any of this stuff (or if it all sounds like gibberish to you), then you're probably interested in building a cheap machine. In that case, find an inexpensive motherboard/CPU combo kit and don't worry about all of these details.

Buying Parts
Once you have chosen your motherboard, you are ready to choose everything else. Here's what you need to get:
· The CPU that's the right brand and the right pin configuration to fit your motherboard. Pick whichever CPU clock speed fits your budget and intentions. (If you purchase a motherboard/CPU combo, you can skip this step.)
· The RAM with the correct pin configuration that will match your motherboard. If your motherboard is using a specialty RAM configuration (normally to improve performance), make sure the RAM you buy matches its requirements.
· If the case does not come with a power supply, you'll need to choose one. Make sure its connectors match the motherboard. Three hundred watts is enough for most machines, but if you are building a gaming machine with multiple video cards or a machine with lots of disks, you may want to consider something bigger.
· Choose a video card if you are not using the onboard video on the motherboard. Make sure the card's connector is appropriate for the motherboard (AGP or PCI Express).
· Choose an optical drive. If you are building a cheap machine, get the cheapest CD-ROM drive you can find. If you want to burn DVDs and CDs, make sure the drive can handle it.
· Choose a hard disk, making sure that it matches the PATA/SATA status of your motherboard.
· Choose an operating system: Windows XP (which comes in Home, Professional and Media Center editions) or Linux in its hundreds of different forms.

Buying
Now that you have picked everything out, it is time to purchase your parts. You have three options:
· Mail order on the Internet - All kinds of stores sell computer parts on the Web. Visit a place like HowStuffWorks Shopper to compare prices. Don't forget about eBay.
· A big national chain - Places like Tiger Direct, Fry's, and CompUSA have stores in most large cities that will sell you parts. They also have people on staff who may be able to answer questions.
· A local parts retailer - Any big city will have a number of smaller, local shops selling parts. Look in the Yellow Pages or online. I live in Raleigh, N.C., and a typical shop of this genre in Raleigh is called Intrex. The people working at a shop like this can often answer lots of questions, and they may also be willing to help you if your machine does not work after you assemble it.
Now that you have your parts, it is time to build. This is the fun part.

Building
But before we start building, we need to say one thing about static electricity. Most of the parts you will be handling when you assemble your computer are highly sensitive to static shocks. What that means is that if you build up static electricity on your body and a shock passes from your body to something like a CPU chip, that CPU chip is dead. You will have to buy another one.
The way you eliminate static electricity is by grounding yourself. There are lots of ways to ground yourself, but probably the easiest is to wear a grounding bracelet on your wrist. Then you connect the bracelet to something grounded (like a copper pipe or the center screw on a wall outlet's face plate). By connecting yourself to ground, you eliminate the possibility of static shock.
Each combination of parts is unique. But in general, here are the basic steps you will need to follow when you assemble your machine.
Installing the Microprocessor and RAM
First, you'll need to unwrap the motherboard and the microprocessor chip. The chip will have one marked corner that aligns with another marked corner of its socket on the motherboard. Align the corners and drop the microprocessor into the socket. You don't need to apply any pressure - if it's aligned correctly, it should fall into place. Once you have it in, cinch it down with the lever arm.

Now, you need to install the heat sink. The CPU box will contain a manual that tells you how to do it. The heat sink will contain either a heat sink sticker or heat sink grease to use when mounting the heat sink on the CPU. Follow the instructions closely to install it. To install our heat sink, all we had to do was put it in place, cinch it down with flanges on either side and lock it with a cam. Connect the power lead for the heat sink to the motherboard.

Next, you'll install the RAM. Look on the motherboard for the slot marked "one" and firmly press the RAM module into it. It will probably take more pressure than you'd think to get the RAM into place. Each side of the module should also have a rotating arm that will lock the RAM down.
Now your motherboard is ready to put in the case.

Assembling the Case
Next, you'll assemble the case. You'll need to install the power supply, the motherboard, a faceplate and standoffs to hold the motherboard in place. You'll also need to connect some wires to the motherboard.

Your motherboard should have come with a face plate for its back connectors. The case already has a hole cut in it for the plate, so you just need to put in the plate and press it until it clicks into place. Now you can put in the motherboard. It needs to sit about a quarter of an inch away from the case's surface so that none of its connectors touch the case. You'll accomplish this by placing spacers, which are also included with the motherboard.

Because each motherboard is different, you'll have to set it into the case first to see which screw holes on the motherboard match up with the pre-drilled holes in the case. Then you can take the motherboard back out, place the spacers, and put the motherboard in on top of them. Make sure that the motherboard lines up with the faceplate and the holes line up with the spacers.
Find the screws that fit the spacers (these should have come with the case) and screw down the motherboard. Don't screw them in too tightly -- they just need to be in snugly. Be very careful when putting in the screws. If you drop them into the case, they could damage the fine wires on the motherboard.

Installing the Power Supply
Now you can install the power supply in the case if it's not already installed. The power supply has two sides. The fan side faces outside the case and the wire side faces inside. Slide the power supply onto its brackets and secure it with screws (the case or the power supply should have come with them).

Connect the power leads to the motherboard. There should be a large one and a small one, and it will be obvious where each one goes.
You'll be left with about 15 more wires. Don't worry -- the manual has a page to tell you exactly where each one goes. Each of them has a label that corresponds to a label on the correct port.

Installing the Hard Drive
The last steps are installing the hard drive and the CD-ROM drive. The case has a removable bracket with four rubber grommets on it, which line up with four holes on the hard drive. It also came with four screws made just to punch through those grommets. Screw the hard drive into the bracket, then put the bracket back into its slot in the case. If you are using IDE/PATA drives, be sure to set the jumpers correctly. Then connect the hard disk to the power using one of the connectors coming off of the power supply. If it fits, then it's a match.

Now install the cables. One side of the cable has a red stripe on it, which marks it as "pin 1." Look on the motherboard and hook the cable into the IDE connector marked "1." Insert the other end of the cable in the back of the drive. Now the drive is ready to go.
Install the CD-ROM drive next. Again, set the jumpers correctly. The drive fits in the front of the case, and you may have to pop out a faceplate to make room for it. Slide it in and screw it into place, making sure that it's aligned with the front of the case. Just as with the hard drive, you can use any available connector from the power supply. You'll also use the cable that came with the CD-ROM drive to connect it to the motherboard (align the red stripe for "pin 1") and plug the other end into the drive. Connect the audio for the CD drive. Again, there's an obvious place for it to plug in on the motherboard and on the drive itself.

If you're using a video card, now you'll install it as well. Our motherboard has an AGP video slot so we have an AGP video card. The motherboard only has one video card slot, so you should be able to find it easily (you can also use the manual). Line up the card with the slot and push it into place. If the video card has its own power connector, connect it to the power supply. If the case has extra fans, make sure they have power too.
Now you can close up the case and add a monitor, keyboard, mouse and speakers. In the next section, we'll cover what to do after powering up the computer and what steps to follow if it doesn't work.

Powering Up and Troubleshooting
Now, the moment of truth -- it's time to turn your machine on and see if it works. If there's a switch on the back of the power supply, make sure it is on. Also make sure that the power supply is set correctly to 110 or 220 volts (some power supplies do this automatically, others have a switch or a slider).
Then push the power switch on the front of the case. In the ideal case, four things will happen:
· You will see/hear the fans spin up.
· You will hear the hard disk spin up.
· Lights will light on the case.
· You will see something happening on the monitor to indicate that the motherboard is alive.

If you see/hear all of that happening, you are successful. You have created a working machine. Using the manual that came with the motherboard you can enter the BIOS screens and make sure everything looks OK. Chances are you will need to set the machine's date/time, but that is probably all you have to do. Everything else is probably automatic. All the drives will be recognized and auto-configured. The default settings on the motherboard will be fine.
The next step is to install the operating system. And presto, you have a working machine of your own creation.

Congratulations!
Troubleshooting
What if you put it all together and it doesn't work? This is the one possible downside of building your own machine. It is hard to describe the feeling you get when you try turning on the machine and nothing happens. You have put in several hours of work and a significant amount of cash, so it's discouraging to get no response.

All is not lost, however. Here are several items to check:
· Is the power supply firmly plugged in and turned on (many power supplies have a small switch on the back)? Try a different outlet.
· Did you plug the power supply into the motherboard? Look at the manual for details.
· Is the case's power switch properly connected to the motherboard? If you have plugged the switch into the wrong pins on the motherboard, it will not work. Check the motherboard manual.
· Are the drives connected to the motherboard properly? Do they have power?
· Unseat and reseat the video card. If the motherboard has onboard video, try to remove the video card completely and boot using the onboard version.

If you have checked all of that and nothing continues to happen, it could mean:
· The power supply is bad
· The switch on the case doesn't work. We actually had this happen once on a machine we built at HowStuffWorks.
· Something is wrong with the motherboard or the CPU.

The easiest way to determine where the problem lies is to swap parts. Try a different power supply. Swap a different motherboard into the case. Play around with different combinations.
If it is still not working, then you have a few options at this point. You can go back to the shop that sold you the parts. If you bought them from a small local shop, they can help you debug the problem (although it may cost you). If they sold you a bad motherboard (rare, but possible) they will usually help you out. You can also try to find a more experienced builder who would be willing to help you. There is a rational cause for the problem you are experiencing -- either a bad part or a bad connection somewhere -- and you will find it.

Now that you've seen how simple it is to build your own computer, we hope that you'll give it a shot. You'll have a computer that you understand completely and will be easy to upgrade. You can save money, and it's a lot of fun too. So the next time you need a new computer, consider building it yourself!


www.computer.howstuffworks.com/build-a-computer.htm