Friday, August 21, 2009

5 Tools That Every Network Administrator Should Have

Every network administrator has their own set of tools that they like to use on a daily basis to help them do their job. Here I list the 5 tools I like most.

Network Analyzer - There are actually two sniffer applications that I keep in my toolbox: Wireshark and Capsa Network Analyzer. Each satisfies different needs. Wireshark has more functionality when it comes to filters, but Capsa Network Analyzer's strength, from my point of view, is its user interface. It presents the data in an extremely easy-to-read way, so you don't need to be a hard-core network engineer to see what's happening, and the pretty graphs make me happy.

PuTTY - PuTTY is a very versatile terminal application for when you spend a lot of your day working on Cisco equipment. PuTTY allows a number of different ways to connect to a piece of equipment, including Raw, Telnet, Rlogin, SSH, and, in the newest version of PuTTY, Serial. The new Serial option is very handy for network administrators, since HyperTerminal is no longer included with Windows Vista and you still need a serial connection for new routers and switches. PuTTY is also very customizable and can be run from a USB drive without installing anything onto the computer.

PumpKIN - PumpKIN is a free TFTP server program that you can download and use to turn your computer into a TFTP server. I use this program mainly for transferring Cisco images back and forth between a switch or router and my computer. This program becomes very valuable when you have a switch or router down that you need to get back up quickly.

MAC Scanner Pro - Colasoft MAC Scanner Pro has some advanced features. Apart from scanning MAC addresses and IP addresses, its most practical feature is that it allows users to export or print the scanning results.

NetStumbler - NetStumbler was one of the first "wardriving" programs you could get to pick up other people's wireless networks. I use this tool on a regular basis for the opposite reason: I want to be able to check for rogue access points on my network. I simply take this little tool, walk around all of my offices, and see what wireless devices pop up. I have found a couple of employees who wanted to work outside or away from their office and added a wireless AP so they could.

So those are the 5 tools I believe every network administrator should have in their toolkit. For their ease of use, small size, and versatility, they made my top 5.

Thursday, August 20, 2009

The 7 Most Common Mistakes Using Network Analyzers

Colasoft Capsa network analyzer

1) Over-believing the software's "intelligence" without understanding how it makes determinations.

Software default settings are very seldom correct for YOU. For example, a device may say that a SQL server should respond in 50ms. But, if that device is across a WAN with a 200ms ping time--that is highly unlikely. This causes false SLOW SQL messages. This is only an example, but there are many such alerts and messages based on default "thresholds" within this type of software tool's configuration.

Particulars of your environment may create false alerts or other messages. The definitions of what is an "excessive" delay--latency--broadcasts, etc, are up to you--not the tool.

It's important for you to know the default settings driving alerts and messages. Then, ignore or alter those alerts that are not set appropriately for your enterprise; altering them to the right settings is the best strategy. Too many false flags or alerts numb you into ignoring important ones--or cause you to make serious errors and incorrect decisions that can be very, very expensive.

Properly used, those features can save enormous amounts of time and show things your own eye would likely miss.

2) Not understanding the Protocols used, such as TCP, HTTP, etc.

What good is a tool that tells you information about how a protocol is behaving if you do not understand the underlying technology? By this I mean the RFCs for the protocols that are relevant to your concerns.

---What is the impact of various protocols working differently for the same application doing the same transaction--in different locations?

---What is expected according to specs--and how is your trace file showing different--or less optimal behavior?

---Why would there be 2 TCP connections from one location and 10 from another--for the same application doing the same transaction?

This short article cannot answer all these questions--but it can show you the types of information that you will need to understand in order to make sense out of the data a trace file will show you. Know the protocols well. Deep understanding of TCP is the basic price of admission. While you may consider this a matter of skill sets, my point is that attempting to troubleshoot a problem with a packet-sniffer while not understanding the protocols is a mistake--and a common one. If you add this point to the first one listed--about not believing all the standard settings on tools--you find that the tool cannot answer anything for you by itself. You need to know what you are looking at. You are the analyst--the tool is just an aid.
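Deep TCP understanding starts with the header itself. As a tiny illustration (a hand-rolled sketch, not part of any analyzer's API), the flags your sniffer displays--SYN, ACK, FIN and friends--are literally just bits at a fixed offset in the 20-byte header:

```python
import struct

# TCP flag bit masks, per RFC 793.
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def tcp_flags(header: bytes) -> list:
    """Decode the flag bits from a 20-byte TCP header."""
    # Byte 13 holds the flags (after ports, seq, ack, and data offset).
    flag_byte = header[13]
    names = [("FIN", FIN), ("SYN", SYN), ("RST", RST),
             ("PSH", PSH), ("ACK", ACK), ("URG", URG)]
    return [name for name, bit in names if flag_byte & bit]

# Build a fake SYN/ACK segment: source port 80, destination port 51000.
header = struct.pack("!HHIIBBHHH",
                     80, 51000,    # source / destination ports
                     1000, 2000,   # sequence / acknowledgment numbers
                     5 << 4,       # data offset (5 x 32-bit words)
                     SYN | ACK,    # flags
                     65535, 0, 0)  # window, checksum, urgent pointer

print(tcp_flags(header))  # → ['SYN', 'ACK']
```

Once the bit layout is second nature, the analyzer's decoded view stops being magic and starts being a shortcut.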

3) Not understanding the layer 1 and layer 2 aspects of the topology you are sniffing.

Ethernet and all other topologies have many different specifications, which are altered or outright ignored by many switch or other network device manufacturers. You must know the specs and how the hardware you are working with applies those specs--or doesn't apply them. A classic example is Spanning Tree. There are IEEE specifications for Spanning Tree, but those specifications are just a model...not a law. Each manufacturer has tweaked it in order to create some proprietary advancement to give them a competitive advantage. Sometimes, those advances become the new spec. However, you need to know what is standard and how your equipment varies on that theme. What good is seeing the BPDUs in a trace file if you don't understand what they contain or how they relate to the problem at hand? Again, this may be looked at as a skill-set issue but--expecting to solve critical problems with a packet-sniffer while not knowing this about your network is a mistake.

4) Uni-directional SPANs or Port Mirroring & Single-sided trace files.

Often the switch port used by a server you need to monitor is incapable of providing a bi-directional SPAN (Port Mirror). If so, you cannot get answers from such a trace as it will miss critical information. It can be an oversight by the Engineer doing the trace but sometimes it is simply not understood to be such a critical concern--and ignored. Either way, when you have a situation like this you need to bite the bullet and put in a Change Order to get it moved to a fully bi-directionally mirror-able port before any serious analysis can be done.

Here is a good example of why this is so. Picture a Client and a Server. The Server wants to end a specific TCP connection and keeps sending FIN's. Yet, we never see the Client send back a FIN ACK. We do see other traffic between them and know that there is connectivity. So, here are the questions:

--Are the FINs not arriving at the Client--or is the Client receiving them and appropriately sending back the FIN ACK--which is not getting back successfully?

--If so, then it is most likely a network issue.

--Are the FINs arriving successfully--but being ignored by the Client?

--If so, then it is most likely a Server or OS or Data Center issue.

These questions cannot be answered with a trace file that only sees one side of the conversation. Two traces, synchronized, are needed to determine the answer to these questions.

5) Incorrect filters--either Capture or Display

An important concept here is that filters add nothing--they only remove--they only filter out. When you say that you are "filtering for" what you mean is that you are "filtering out" everything else. This isn't just semantics as understanding this perspective is critical to success.

Capture Filters:

Capture Filters are irreversible. If you filtered out something that you need to see--you just aren't going to see it. There is no second chance without running the test again.

Capture Filters determine what is allowed in the Capture Buffer. If the data is there to see--great. If you filtered what you need out--you can't change the filter after the fact. A very experienced Protocol Analyst may notice the problem by seeing anomalies that amount to the shadow of the missing data--but most will not be able to tell. And, of course, even if you can tell--you still have to re-test.

This might lead you to think that you should not use Capture Filters--and that is half true. If you don't really need them--don't use them. However, if you are drinking your packets out of the Fire Hydrant--you have no choice. Under those conditions the data will fill up your Capture Buffer in less than a single second.

Another point is that they should be consistent within a Test Design. If they vary too much, they will create false differences that can easily lead the Network and Application Performance Analyst or Protocol Analyst astray.

Monitor Filters:

Monitor Filters are forgiving. They work the same way--in that they filter out, not in. However, you can change your mind. The data is in the can (trace file) and it is only a matter of changing the filter to see what was filtered out the last time. Many times I am stumped and then have an idea--go back and change my Monitor Filters--and bam! There is the answer. The point is--incorrect Monitor Filters will just as easily lead you astray--but you still have the opportunity to find your way back since the data is still there.

Again, this might leave you thinking to avoid Monitor Filters. Don't even consider it. Removing irrelevant packets is required to properly measure distinct conversations and search for anomalies. In fact, understanding proper filtering is what using the packet-sniffer software is all about.
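The asymmetry between the two filter types can be sketched in a few lines of Python (the "packets" here are made-up dictionaries, not output from any real sniffer):

```python
# Toy packet stream: each "packet" is just a dict with two fields.
packets = [
    {"proto": "TCP", "port": 80},
    {"proto": "UDP", "port": 53},
    {"proto": "TCP", "port": 443},
]

# Capture filter: applied while capturing. Non-matching packets are never
# stored, so the decision is irreversible.
buffer = [p for p in packets if p["proto"] == "TCP"]

# Monitor (display) filter: applied to the saved buffer. Change your mind,
# re-apply a different predicate, and the full buffer is still there.
web = [p for p in buffer if p["port"] == 443]

print(len(buffer))  # 2 -- the UDP packet is gone for good
print(len(web))     # 1 -- but this view can be widened again at any time
```

Both expressions "filter out"; the difference is only whether the discarded packets still exist somewhere.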

6) Lack of understanding the Network-Analyzer's CURRENT settings.

Monday, you created a Capture Filter and left it as the default. Friday you need to capture a trace file and click on Capture. Various people perform their roles in the test and you save the trace file. Everyone goes home, back to their main job function or to bed. Then you look at it and discover that you didn't realize the old Capture Filter was still in effect! Why? You altered the default Capture Filter instead of creating a new one. Your trace file is useless.

Always remember to review ALL settings before beginning a test. Additionally, run a practice test to make sure all filters and settings are as they should be.

Sometimes the error you discover is that you were given an incorrect IP address and that you never would find what you are looking for from the IP address from which you are capturing packets. That is a GOOD finding. It means someone's diagram is incorrect. It also means you prevented a useless round of testing.

7) Lack of test controls.

Like any proper experiment, a performance or application test requires a control group and controlled data for all groups. If it was a pharmaceutical test you might have a group with a placebo. In our field we need to create a "BESTline" first. A "Bestline" is not a baseline.

Here is an example.

You have a Client in Singapore and a Server in New York City. The client in Singapore takes 40 milliseconds to execute a transaction, while European clients need only 30 milliseconds. Singapore, although farther away, has a faster connection and is expected to get it done in the same time as Europe. What now? Take a Bestline. Use a client in New York City running the same transaction in the same way, on similar equipment, against the same server as the other two tests. You may discover that it still takes 25 milliseconds! This may be due to various issues in the Data Center, Server, or the PC itself--25 milliseconds is the fastest it goes!

This means that the first 25 milliseconds have nothing to do with the transport distance or speed. It DOESN'T mean that you have to accept those 25 milliseconds. There is a great deal that can be done about it. However, it is not the network and you now know you have to focus on the Server, PC, Data Center and other components.
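The Bestline subtraction is trivial arithmetic, but writing it down keeps the reasoning honest (the numbers are the ones from the example above):

```python
def network_share(total_ms: float, bestline_ms: float) -> float:
    """Portion of a transaction time attributable to the network path,
    given a Bestline measured by a client local to the server."""
    return total_ms - bestline_ms

bestline = 25.0   # New York client, same server: the fastest this transaction goes
singapore = 40.0
europe = 30.0

print(network_share(singapore, bestline))  # 15.0 ms attributable to the network
print(network_share(europe, bestline))     # 5.0 ms attributable to the network
```

Without the 25 ms control, you would be hunting the network for delay that actually lives in the Data Center.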

Such controls are easy to do--yet seldom done. That common error results in many false leads and false errors as well as lost time and money.

Wednesday, August 19, 2009

How to Discover Network Security Loopholes

Discovering the loopholes in a network has become a pressing concern as the wonders of global connectivity unfold. Such diversity calls for companies to invest more in training their network operators on the discovery of network loopholes. At the same time, there exist sophisticated hackers and crackers who spend sleepless nights contemplating how to accurately discover security loopholes in a network that will enable them to penetrate it. This calls for network security managers who have the ability to hack into their own systems first.

These few challenges are the main forces driving research on discovering network security loopholes and as technological advances emerge, the cat and mouse game continues between attacker and protectors.

The major method that is being employed in most networks today to discover security loopholes is Penetration Testing as is examined below.

Penetration Testing

This can be defined as a process of actively testing information security measures. Organisations perform penetration tests to identify the threats facing them and to resolve their vulnerabilities and weaknesses.

There are different types of penetration tests available. They are:

i. External Penetration Testing

The oldest approach to testing, mainly focused on the servers, infrastructure and software present in the target system. This type of testing is usually performed either with no prior knowledge of the site or with total knowledge of the network topology.

ii. Internal Security Assessment

This approach is similar to external penetration testing, with the addition of a security report of the site. This testing is typically performed from a number of access points representing the different network segments.

iii. Application Security Assessment

This identifies and assesses threats to an organisation through software applications that might provide interactive access to potentially sensitive materials. It is essential that these applications are assessed to ensure that they do not expose the servers and the software to attack.

iv. Telephony Security Assessment

This assessment addresses security concerns relating to corporate voice technologies.

v. Social Engineering Security Assessment

This assessment addresses social engineering, which is a non-technical kind of intrusion.

For more information about Penetration Testing a great website that has lots of information is .

Network Analysing

After penetration testing, it is quite easy to detect and confirm network problems with a network sniffer/analyzer. With its professional data-capture technology and comprehensive network analysis capability, Colasoft Network Analyzer will help you monitor your network within seconds and maximize your network's value.

Tuesday, August 18, 2009

Are You Being Watched?

by Brett Glass --

How private is your PC data? Thanks to the proliferation of Internet worms and hardware and

software spying tools, the erosion of loyalty between corporations and their employees, and the

9/11 disaster (which has caused many to value security over privacy and civil rights), the

likelihood is greater than ever that your computer is reporting your every move to a suspicious

spouse, a government agency, an employer, or the entire world. In this article, we'll cover the most prevalent spying hardware and software and explain how it can be used and abused.


A hardware key logger is a device that captures keystrokes en route from keyboard to PC.

KeyGhost, a New Zealand company, offers two hardware key loggers. The first is

an inconspicuous cable that runs from the keyboard to the PC (prices start at $139 and go up to

$409 direct). The second is a keyboard with the logging hardware tucked entirely inside the case

($189 and up). The company claims to have a wide variety of bugged keyboards ready-made to match

many brands of computers. If your existing keyboard is unique, KeyGhost will modify it and return

it with the logger hidden inside. Both the internal and external versions have maximum capacities

of about 2MB—enough memory to capture as much as a year's worth of typing. The Spy Store

shows a more compact external key logger ($139 direct).

It has a smaller memory capacity, but its capabilities are otherwise similar.

Hardware key loggers usually can't be detected by software and may be tough for non-technical

users to spot. They're also compatible with most operating systems and don't require complicated

installations. The main drawback is that they can't capture the information that appears on the

screen but isn't typed in by the user. So hardware devices are best used to sniff out small but

vital pieces of information, such as passwords.

Although keystroke-logging hardware is relatively new, software that performs the same

function is not. In 1988, I implemented a primitive network keystroke logger as a DOS TSR, using

the NetBIOS protocol. My motivation at the time was not to spy but to ensure that my programming

work was preserved on another machine in the event of a system crash.

But today's spying programs do much more than log keystrokes. Spying software can be selective

about the data it captures; administrators can set the software to skim information and then

capture more data when certain criteria are met. WinWhatWhere Investigator

, a major product in the monitoring market, captures keystrokes, e-mails

information about your activities when key phrases are entered, and even renames itself and

changes its location at random. If the victim's machine has a Webcam connected, WinWhatWhere

snaps pictures periodically and sends them out surreptitiously.

SpectorSoft makes Spector Pro, which captures screen shots, records e-mail and chat sessions, and logs keystrokes. In short, if something of interest to you happens on

a user's machine, you will not only know what the person typed, you'll have logs of e-mail and

chat room conversations and pictures of the screen. Competing products such as D.I.R.T., from

Codex Data Systems, offer similar features. And several

keystroke logger programs are freely available for download from many shareware archives. Logging

software is easier to detect via system diagnostic tools, however, and may be wiped off the hard

drive by reconfiguring or reinstalling the operating system.

In some cases, spying software may be installed as a virus, worm, or Trojan horse that arrives

via e-mail or an infected file. BackOrifice, a program created by a group of rogue hackers called

The Cult of the Dead Cow, can be installed in this way and can spy on and even commandeer the

victim's system. Several recent worms, including Badtrans.B, attempt to capture passwords and

credit card information from users' systems and forward the information to the worms' creators

via e-mail or Internet relay chat (IRC).

Another spying technique uses a network sniffer (usually a computer running special software)

installed on the same LAN as the victim's computer or upstream between the victim's computer and

the Internet. The sniffer taps and records the raw data flowing between the victim and other

machines; this data can be scanned later.

Only a few Internet protocols use encryption. E-mail is most often sent and retrieved as plain

text, and the password needed to break into someone's electronic mailbox is very rarely

encrypted. If encryption is used, a key logger can often be used to discover the password that

unlocks the data.

The FBI's Carnivore system, which is installed at ISP facilities to collect evidence, is one

example of a network sniffer. Civilian tools that can sniff LAN traffic—even on networks

supposedly protected from monitoring by network switches—are widely available for free via the Internet.


Even if the party who wants to spy on you has no physical access to your network, you cannot

necessarily rest easy. A cracker who manages to gain control of any vulnerable system on your

network can set it up to sniff traffic from the rest of the network. And recently revealed bugs

in most implementations of SNMP (Simple Network Management Protocol) may provide an easy way for

intruders to take over managed hubs and switches, routers, print servers, and network appliances.

(For more on these bugs, see the CERT advisory.)

Friday, August 14, 2009

What is the difference between an Ethernet hub and switch?

Although hubs and switches both glue the PCs in a network together, a switch is more expensive and a network built with switches is generally considered faster than one built with hubs. Why?

When a hub receives a packet (chunk) of data (a frame in Ethernet lingo) at one of its ports from a PC on the network, it transmits (repeats) the packet to all of its ports and, thus, to all of the other PCs on the network. If two or more PCs on the network try to send packets at the same time, a collision is said to occur. When that happens, all of the PCs have to go through a routine to resolve the conflict. The process is prescribed in the Ethernet Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol. Each Ethernet adapter has both a receiver and a transmitter. If the adapters didn't have to listen with their receivers for collisions, they would be able to send data at the same time they are receiving it (full duplex). Because they have to operate at half duplex (data flows one way at a time) and a hub retransmits data from one PC to all of the PCs, the maximum bandwidth is 100 Mbps, and that bandwidth is shared by all of the PCs connected to the hub. The result is that when a person using a computer on a hub downloads a large file or group of files from another computer, the network becomes congested. In a 10 Mbps 10BASE-T network the effect is to slow the network to nearly a crawl. The effect on a small, 100 Mbps (million bits per second), 5-port network is not as significant.

Two computers can be connected directly together in an Ethernet with a crossover cable, and a crossover cable doesn't have a collision problem: it hardwires the Ethernet transmitter on one computer to the receiver on the other. Most 100BASE-TX Ethernet adapters can detect when listening for collisions is not required, through a process known as auto-negotiation, and will operate in full-duplex mode when it is permitted. The result is that a crossover cable has no delays caused by collisions, data can be sent in both directions simultaneously, the maximum available bandwidth is 200 Mbps (100 Mbps each way), and there are no other PCs with which the bandwidth must be shared.


An Ethernet switch automatically divides the network into multiple segments, acts as a high-speed, selective bridge between the segments, and supports simultaneous connections of multiple pairs of computers which don't compete with other pairs of computers for network bandwidth. It accomplishes this by maintaining a table of each destination address and its port. When the switch receives a packet, it reads the destination address from the header information in the packet, establishes a temporary connection between the source and destination ports, sends the packet on its way, and then terminates the connection.

Picture a switch as making multiple temporary crossover cable connections between pairs of computers (the cables are actually straight-thru cables; the crossover function is done inside the switch). High-speed electronics in the switch automatically connect the end of one cable (source port) from a sending computer to the end of another cable (destination port) going to the receiving computer on a per packet basis. Multiple connections like this can occur simultaneously. It's as simple as that. And like a crossover cable between two PCs, PC's on an Ethernet switch do not share the transmission media, do not experience collisions or have to listen for them, can operate in a full-duplex mode, have bandwidth as high as 200 Mbps, 100 Mbps each way, and do not share this bandwidth with other PCs on the switch. In short, a switch is "more better."
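That forwarding logic can be sketched as a toy learning switch in Python (a simplification for illustration only; real switches do this in dedicated hardware, and the "frames" here are just MAC-address strings):

```python
class LearningSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def receive(self, in_port, src_mac, dst_mac):
        """Handle one frame arriving on in_port; return the egress port(s)."""
        # Learn: remember which port the source address lives on.
        self.mac_table[src_mac] = in_port
        # Forward: if the destination is known, send out that one port only.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        # Otherwise flood to every port except the one the frame came in on
        # (which is what a hub does for *every* frame, known or not).
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=4)
print(sw.receive(0, "aa", "bb"))  # unknown destination: flood -> [1, 2, 3]
print(sw.receive(1, "bb", "aa"))  # "aa" was learned on port 0 -> [0]
print(sw.receive(0, "aa", "bb"))  # now "bb" is known on port 1 -> [1]
```

The MAC table is the whole trick: once both sides have been learned, every frame takes a private path between exactly two ports.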


Actually, Capsa customers frequently ask why they have to deploy Capsa on a hub only. From the information above, we can see that a switch transmits data selectively (based on the destination MAC address), while a hub sends the data out every port. So we have to install Capsa on a hub to capture all the data on the network.

Understanding Network Management and Network Monitoring

Network management may mean different things to different people. To some, network management may be a network consultant monitoring network activity with a network analyzer (such as Colasoft Capsa Network Analyzer); to others, network management may be about distributed databases and high-end workstations generating and analyzing traffic. Speaking generally, network management is a service which uses a wide range of devices, tools, and applications to enable network managers to monitor and maintain networks successfully and efficiently.

Network management deals with the top-level administration and maintenance of widespread and large networks, commonly seen in the fields of computers and telecommunications, which may necessarily include user terminal equipment.

Network management executes functions such as security, control, allocation, monitoring, coordination, deployment and planning, to name a few. It is also worth noting that network management is supported by several protocols, including SNMP, Common Information Model, CMIP, WBEM, Transaction Language 1, Java Management Extensions, and NETCONF.

Routing is also an important area of network management. Routing refers to the process of selecting the paths in a computer network on which to send data. In this arena of network management, logically addressed packets get transported from their source to their destination with the help of nodes. These nodes are called routers, in a process termed as forwarding.

Successful network management also uses accounting management. This controls and reports on the financial status of the network. This area of network management involves bank account maintenance, financial statement development, and analysis of cash flow and financial health.

Coming to network monitoring, it is about policing network traffic. In other words, network monitoring watches the network for the benefit of smooth network management, of which it is a part. Ideally, network monitoring is a function that one of your systems performs on an ongoing basis: while the other systems are performing the functions assigned to them, at least one computer should be set aside to monitor network activity. This is network monitoring in a nutshell.

The computer performing network monitoring must be kept always on, which means that the network monitoring system should have exclusive power lines or a backup generator facility. Everyone should understand that the network-monitoring system is the most critical part of any network, because it is with the help of network monitoring that the alarm will be sent if something is wrong.

Network monitoring will identify slow or failing systems and notify the network administrator of such lapses. Issues like overloaded systems, crashing servers, lost network connections, virus infections, and power outages will be dealt with without losing time if network monitoring is in place.
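A minimal version of that watchdog can be sketched with nothing but the standard library (the host list is hypothetical; a real deployment would add scheduling, logging, and alerting on top):

```python
import socket

def check_service(host, port, timeout=2.0):
    """Return True if a TCP service accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def failing_services(services):
    """Return the (host, port) pairs that did not respond -- the ones the
    monitoring station should raise an alarm about."""
    return [(h, p) for h, p in services if not check_service(h, p)]

# Hypothetical watch list -- substitute your own servers and ports.
watch = [("mailserver.example.com", 25), ("www.example.com", 80)]
```

Run `failing_services(watch)` from a loop or cron job on the always-on monitoring machine and page someone whenever the list is non-empty.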

How to Protect Your Network from Spam?

According to the July 2009 edition of the MessageLabs Intelligence Report, spam remains a major problem. In fact, it has reached up to 90% of all email, and in some European countries the figure is even higher, up to 95%.

Three main problems caused this bad situation.

  • The use of automated tools: Spammers use automated tools to generate email addresses based on domain names.

  • URL-shortening spam: Currently, many social networking sites offer URL-shortening services to users, and 6.2% of spam emails contain shortened URLs to mask unsafe destinations.

  • International problem: Contrary to the assumption that the sources of spam emails are outside the United States, the July statistics show that at least 86% of the spam e-mail sent in the US originates domestically.
As network administrators, what can we do to mitigate the effect of spam?

Well, there are two specific network-level measures you can take.

Traffic management

You'd better install a network analyzer, such as Colasoft Capsa network analyzer, in your network; it will help you monitor network traffic in real time, especially the SMTP traffic we care most about in this article. Traffic management entails reducing overall message volume by relying on techniques that are implemented at the protocol level. Essentially, unwanted senders are identified and their connections dramatically throttled using features that are inherent to the TCP protocol. This allows incoming volumes of spam to be slowed, giving legitimate mail an opportunity to be processed and expedited by the mail server.

This technique is not a complete solution, but it is nevertheless useful for reducing the effect of a DoS-style e-mail flood.

Connection management

Another method is the use of connection management techniques. An example would be for incoming SMTP connections from sources known for sending spam and malware to be immediately rejected. Such blacklists can be applied at the firewall level and could also include open proxies or known botnets.

The obvious benefit of connection management is that mail servers do not even have to waste

processor cycles to deal with the incoming spam.
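The blacklist check itself can be sketched in a few lines (the addresses and netblocks below are invented stand-ins from the documentation ranges, not a real reputation feed):

```python
import ipaddress

# Hypothetical blocklist: individual hosts and whole netblocks known
# for spam, open proxies, or botnet activity.
BLOCKLIST = [
    ipaddress.ip_network("203.0.113.0/24"),    # example netblock
    ipaddress.ip_network("198.51.100.25/32"),  # example single host
]

def reject_connection(client_ip):
    """True if an incoming SMTP connection should be dropped before the
    mail server spends any cycles on it."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKLIST)

print(reject_connection("203.0.113.7"))  # True  -- inside the blocked /24
print(reject_connection("192.0.2.1"))    # False -- not listed
```

In practice the lookup would be wired into the firewall or the MTA's connect hook, and the blocklist would be refreshed from a DNSBL or similar feed.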

Do you have other methods? Let's share our knowledge here!