Monday, August 2, 2010

Will the Cisco Cius Make Desktop Video Conferencing Mainstream?

With the Cisco Cius product launch, the age of desktop video conferencing may well be emerging. The Cisco Cius is an ultra-portable, mobile collaboration business tablet that offers access to essential business applications and technologies.

Early adopters are likely to find making video conferencing calls within an organization extremely easy and to become hooked on the experience.

To read the full post, please visit the new location of our blog and start following us there. We recently revamped our website and have moved locations. We look forward to you visiting.

Friday, June 18, 2010

The iPad in Action

I wrote a post some time ago about the potential of the iPad to help manage networks. Since then, I’ve grown much more excited about its potential as a business tool.

The iPad is such a powerful tool for IT administrators because it is much more convenient than a netbook. It is lighter and thinner, can be used instantly and has a large display area – a powerful combination for remote networking.

I’ve been using the iPad myself with the new version of dopplerVUE 2.1 and its web client capabilities. The display is perfect for the powerful visualization features of dopplerVUE. I can clearly display and monitor network status and performance by combining maps, charts and gauges into a single dashboard. If you want to give it a try for yourself, download the 30 day free trial version.

The iPad is a great tool while you’re on the go. Some tasks are always going to be much easier to address with the computer on your desk. But in my mind there is definitely a place for the iPad in terms of network management. If you’re using the iPad for IT management, it would be great to hear about your experience.

Friday, June 11, 2010

A Network Management Resource...

I came across a network management website that I thought was worth sharing. See what you think.


Network Management Software is a source of news, analysis and reviews of the IT network management space. The website is solely focused on network management and offers great tips on the basics of network management and the fundamentals of application monitoring.

Also keep in mind that dopplerVUE recently added application centers for Exchange, IIS and SQL in its new 2.1 release, which is available for download.

Friday, June 4, 2010

3 Keys to Achieving Optimal IIS Performance…

Monitoring Microsoft Internet Information Services (IIS) is something I take pretty seriously. As one of the most relied-upon web platforms for eCommerce websites, web applications, intranet portals and corporate websites, it is definitely a mission-critical application, and there will be some crazed phone calls if the service goes down.


I monitor IIS for two main reasons: to troubleshoot performance problems on the server and to improve server performance. When I can optimize server performance and avoid spending on additional servers and hardware, it is always a big plus.

Here are three steps for achieving optimal performance (for additional details read this article):

1. Monitor Memory and CPU Usage
It is critical to monitor memory and CPU usage and to take any steps necessary to reduce the load on the server. Other processes operating on the server could be using memory and CPU resources needed by IIS. If this is the case, stop non-essential services and move support applications to a different server.

2. Resolve Hardware Issues that Cause Problems
Slow disk drives can delay file reads; if that is the case, improve the disk input/output (I/O). Also install additional network cards if the current ones are fully utilized, to ensure you can perform critical activities such as back-ups.

3. Optimize Web Pages and Applications on IIS
Make sure to test web pages and IIS applications to ensure the source code executes as expected. Take the time to eliminate unnecessary procedures and optimize inefficient processes.

To fully optimize IIS, you have to do some testing and go through some trial and error until you get everything tuned properly. It is definitely worth the time.
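As a rough illustration of step 1, here is a minimal, tool-agnostic sketch that flags high CPU or memory usage. The function, its name and its thresholds are my own invention for illustration; the sampled values would come from whatever counters you already collect (for example Performance Monitor’s Processor\% Processor Time).

```python
# Hypothetical threshold check for step 1. The 80% CPU and 90% memory
# cutoffs are common rules of thumb, not values mandated by IIS.
def evaluate_load(cpu_percent, mem_percent,
                  cpu_threshold=80.0, mem_threshold=90.0):
    """Return warning strings for any resource over its threshold."""
    warnings = []
    if cpu_percent > cpu_threshold:
        warnings.append(f"CPU at {cpu_percent:.0f}% "
                        f"(threshold {cpu_threshold:.0f}%)")
    if mem_percent > mem_threshold:
        warnings.append(f"Memory at {mem_percent:.0f}% "
                        f"(threshold {mem_threshold:.0f}%)")
    return warnings
```

An empty list means neither condition is tripped; anything returned is a cue to stop non-essential services or move support applications to a different server.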

If you don’t want to use a bunch of disparate tools to monitor such a critical server and application, try out dopplerVUE 2.1 for a 30 day free trial. dopplerVUE’s IIS application center lets you display system responsiveness, application services, server and application utilization and alarm conditions all in a single window.

Friday, May 28, 2010

Why Keep Using IE6? 5 Reasons Some People Have Not Upgraded

Network World has an interesting article about how Microsoft is pushing hard to get everybody off of IE6. They describe some really good reasons to upgrade, and it got me thinking: why do people still run the IE6 browser at all?


With a bit of research, here are the top reasons why so many people still have IE6:

1. Certain commercial apps do not support newer versions of IE without major upgrades. With funds for maintenance and upgrades slashed during the recession, it may be very difficult to obtain the necessary dollars to get the latest version of vendor software that supports new versions of IE.

2. Some internal apps do not support newer versions of IE. IE6 offered proprietary APIs that differ from those in current versions. If your development team has moved on or never upgraded, the knowledge needed to upgrade your application may now be missing.

3. IE6 uses less RAM than later versions – installing a new browser version may require you to upgrade the hardware or sacrifice the performance of other, more critical applications. This cost factor encourages some to delay the upgrade.

4. Long refresh cycles – some industries do not refresh their technology until after 5 to 7 years of use. Those of us in technology live around it and want the latest and greatest, but not everyone needs the most advanced technology immediately.

5. And then there are the people who simply have not upgraded because they don’t care to do any updates, and those who ignore all new browser versions for various reasons.

If you haven’t made the leap, consider evaluating your situation and creating an upgrade plan that gets you off of IE6 before 2014 when Microsoft stops supporting XP and IE6. After that, no more security patches for new vulnerabilities.

Friday, May 21, 2010

Four Resource Bottlenecks to Monitor in SQL Server 2008 for Better Performance

Looking to improve Microsoft SQL Server performance? I've found that resource bottlenecks are often the most common issues with SQL Server 2008 performance. You can monitor SQL Server performance with a range of tools built into the server. In my experience, four main culprits are often the key to finding, monitoring and resolving SQL Server 2008 performance issues.

1. CPU Bottlenecks
Monitoring the CPU load can identify systems that are overworked. Generally, when a processor sustains a rate above 80%, the condition should be evaluated and the usage reduced. While you can buy more hardware, you should also look at the queries consuming the most load and attempt to optimize CPU consumption.

Metrics to monitor:
Processor: % Processor Time: sustained above 80% indicates a problem

2. Memory Bottlenecks
There are multiple ways in SQL Server and the base OS to use or reserve memory. It is important to monitor overall physical and virtual memory to ensure it is not fully allocated. When memory is fully utilized, your system works harder to move items around and is less efficient, resulting in a slower system.

Metrics to monitor:
Memory: Available MBytes: less than 50-100 likely indicates a problem, but you may need to see how your local system responds relative to available memory for a more precise number
Monitor the Windows event log for errors indicating that virtual memory has run low

3. Disk I/O Constraints
The SQL server reads and writes to the database on a regular basis. A slow response during processing can result in decreased SQL performance. Improving the disk I/O with hardware is one solution, but you should also ensure that memory problems are not making the problem worse. In addition, consider data compression strategies and review query plans for missing indexes with the database tuning advisor to improve performance.

Metrics to monitor:
PhysicalDisk: Avg. Disk Queue Length: regularly above 2 indicates an I/O bottleneck
Avg. Disk Sec/Read & Avg. Disk Sec/Write: less than 20ms is normally fine, but beyond 30ms is likely to cause slowdowns
PhysicalDisk: % Disk Time: above 50% indicates an I/O bottleneck

4. TempDB Issues
The tempDB provides a storage place for objects, tables and stored procedures. The tempDB can affect both performance and disk space usage which can reduce the efficiency of the SQL Server and any other applications running on the same server.

Metrics to monitor:
Space used: Ensure this does not exceed 80% utilization.
Free Space in tempdb: Monitor and evaluate the proper levels for baseline operations.
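The rules of thumb above can be collected into one small checker. This is a hypothetical sketch (the counter key names and the helper are my own); it simply applies the thresholds from the four sections to sampled values you supply.

```python
# Illustrative thresholds drawn from the four bottleneck sections above.
THRESHOLDS = {
    "processor_time_pct":   lambda v: v > 80,   # CPU sustained above 80%
    "available_mbytes":     lambda v: v < 100,  # memory: under 50-100 MB free
    "avg_disk_queue":       lambda v: v > 2,    # disk queue regularly above 2
    "avg_disk_sec_read_ms": lambda v: v > 30,   # reads beyond 30 ms
    "disk_time_pct":        lambda v: v > 50,   # % Disk Time above 50%
    "tempdb_used_pct":      lambda v: v > 80,   # tempdb above 80% utilization
}

def find_bottlenecks(samples):
    """Return names of sampled counters that breach their threshold."""
    return [name for name, check in THRESHOLDS.items()
            if name in samples and check(samples[name])]
```

Anything this returns points you at which of the four resource areas to investigate first.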

I've found that monitoring these four common resource issues can help troubleshoot and resolve many common SQL Server 2008 bottlenecks. If you don't have time to use a range of tools to monitor all these metrics, consider a solution that provides an integrated view of all the SQL Server metrics that you need. dopplerVUE is a network management solution with an SQL application center that displays system responsiveness, application services, server and application utilization and alarm conditions all in a single window.

Friday, May 14, 2010

Cisco Tech Days – Don’t Miss Out

Want to get the inside scoop on network technology straight from the network experts? Cisco is hosting its Tech Days series throughout the month of May and into June in several cities throughout the country. I’ve attended the sessions in the past and found them to be very useful. Viewing demos, hearing about product roadmaps and features from the horse’s mouth is very helpful.


The hot topics this year include borderless networking, virtualization and collaboration technologies. Find out how the newest innovations can help you develop strategies and deploy solutions to make your network more efficient and effective.

Get the full details here for all the locations and dates. If you’ll be attending the event on May 26th in McLean, VA or the June 9th event in San Francisco, CA let me know. It would be great to meet up and discuss some of these latest networking trends.

Thursday, May 6, 2010

Application Problems and Downtime – Avoid the Pain

Whether it’s the email system, web infrastructure or your database backend, downtime can make your blood pressure rise. I’ve seen the support calls come in when a mission-critical server or application goes down.


That’s why I’m so excited about the new release of dopplerVUE 2.1. Application centers have been added for Microsoft® Exchange, Microsoft® IIS and Microsoft SQL Server® applications. These monitoring centers provide detailed information about the overall health of an application, including its hardware and software dependencies.

dopplerVUE displays responsiveness, application services, server and application utilization and alarm conditions in a single view. Take a look at the screenshot below.

Interested in getting more insight into your Exchange, IIS or SQL servers and applications? Try dopplerVUE 2.1 free for 30 days.

Thursday, April 29, 2010

7 Key Considerations for Managing Exchange Server Health and Status

In my experience, application problems are often a factor in downtime, which makes monitoring mission critical. Resolving application performance issues is no easy task: troubleshooting requires testing and validating the different application layers and dependencies (network, server and application performance) to determine the cause of the problem and solve it. For an application like Exchange Server, there can be a fair number of items to evaluate. Here are some of the items to check when troubleshooting Exchange Server or monitoring its health and status.

Application Monitoring in dopplerVUE 2.1 Available Soon
In early May, the new release of dopplerVUE 2.1 will be available with application monitoring capabilities. In a single window it will display system responsiveness, application services, server and application utilization and alarm conditions. An early preview is provided in the screenshot below, which shows Exchange monitoring displaying average delivery time in milliseconds and queue size at zero, with more metrics in the same window.
Be on the lookout for an email with more details about the release and its availability.

Wednesday, April 28, 2010

Interop 2010 - Video Conferencing and Blazing Fast Switches

It’s that time of year again. I’m in Vegas for Interop reviewing all the new technology and emerging trends. Thought I’d share a few insights from the show.


This year, I’m struck by the number of video conferencing vendors that are at the show. The quality of the technology is incredible and both the service and the equipment are getting more cost-effective. If your company has multiple offices it’s definitely worth considering. On the tech side, there are new blazing fast switches that have been built to keep up with the increased load from the usage of video on networks. Very impressive stuff.

The best booth so far has to be Xirrus, primarily because of the live boxing (picture below). They did a great job getting me to pay attention to their pitch without my minding it one bit. Huge crowds, and, well, it was fun. Kudos to that marketing team!

Friday, April 16, 2010

The Changing Face of the Network: The User and IT Perspective

Is 2010 going to be a year of huge change for the network? With the increasing use of video conferencing, web 2.0 and other emerging technologies, will users demand that the network and IT better support these performance hungry services?


Loudhouse Research, a UK-based firm, thinks so, based on a survey of 152 IT decision makers at companies with 1,000+ employees. The trend would tie the business to the network, and the network engineer to the user, more closely than ever before. The key question: is the network ready for these services? The user and IT perspectives on today's network challenges can be viewed in the chart below.

An interesting difference of opinion concerns performance, which is at the heart of the network and of these new services.


I agree with the trend that Loudhouse points out is coming, but I’m not as sure about the timeline. What do you think? Do you see these changes in your organization?

We do know that the network is constantly changing, so it’s always better to be prepared. In my experience I’ve found it to be helpful to use tools, such as dopplerVUE to manage the network to make sure users are satisfied with performance and that IT is aligned with the goals of the business. There is a free 30 day download available if you’re interested in trying it for yourself.

Tuesday, April 6, 2010

Network Management Becomes Critical for Smart Residences…

I’m sure you hear it all the time: we are becoming a more network-centric world. Is this just hype? I don’t think so, and a recent customer example, I think, proves the point.

It used to be that only really large organizations worried about network uptime and cyber security. These days the network is so critical to revenue and productivity that mid-sized and smaller organizations face the same issues.

Now the network is even vital in high-end residences. As more residential networks pop up, and data networks are joined by home security and advanced audio/video systems, security and uptime become much more serious issues.

A great example is Certified Cyber Solutions (CCS), a company whose SAM (Secure Access Manager) product helps installers and resellers of IP-based residential systems, such as audio/video, home security and “smart home” systems, protect their customers from cyber threats.

CCS is using dopplerVUE as their network monitoring platform to enable their SAM product (screenshot below). dopplerVUE’s unique architecture allows maximum flexibility to customize data collection and data display, making it uniquely suitable for CCS and other hardware and software manufacturers who need an uptime monitoring, diagnosis, cybersecurity and compliance component to their products.

With predictions that there will be 215 million IP enabled devices by 2012, it’s clear that network monitoring will become a more critical activity for the mainstream. Seems like the world is becoming more network-centric one device at a time. Do you see the trend as well?

Friday, March 26, 2010

Comparing Solarwinds & dopplerVUE Layer 2 Switch Port Mapping…

When a customer calls with a problem, I often start the troubleshooting process by validating the connection. I start at the switch port and cable level. This process can be time consuming and challenging because you have to find the specific port a user or server is connected to on the network by checking the cable or manually connecting to a switch. Who has time for that lengthy process?


Fortunately, several tools now have layer 2 switch port mapping capabilities. I recently watched demonstrations of Solarwinds’ and dopplerVUE’s layer 2 switch port mapping capabilities and found them to be quite different despite the same name. The Solarwinds toolset provides very basic functionality: it identifies the switch port a device is connected to and its up/down status.

In a nutshell, here are four reasons to consider dopplerVUE over Solarwinds.

dopplerVUE’s layer 2 switch port mapping delivers:
  • the same functionality as Solarwinds, and more
  • complete mapping in 4 steps vs the 7 steps required by Solarwinds. That’s half the work and time.
  • a simpler process - there is no need to memorize the switch/router and community strings
  • a view of what is connected with how it is performing in a single dashboard (screenshot below)
The dopplerVUE screenshot below displays a complete device view, including the switch port a device is connected to, its current up/down status, the amount of traffic over the interface and any alert conditions. To view a switch and a list of every connected item, simply choose the switch name instead of the end target device name.
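Under the hood, layer 2 switch port mapping is essentially a table join: the switch’s forwarding database (MAC to port, typically read from BRIDGE-MIB via SNMP) is matched against the router’s ARP table (IP to MAC). Here is a hypothetical, pure-logic sketch of that join; the sample data in the usage note is invented.

```python
def port_for_ip(ip, arp_table, fdb_table):
    """Resolve an IP to the switch port it is attached to, or None.

    arp_table maps IP -> MAC (from the router's ARP cache);
    fdb_table maps MAC -> port (from the switch's forwarding database).
    """
    mac = arp_table.get(ip)
    return fdb_table.get(mac) if mac is not None else None
```

For example, with an ARP entry {"10.0.0.5": "00:1a:2b:3c:4d:5e"} and a forwarding entry {"00:1a:2b:3c:4d:5e": "Gi0/12"}, the IP 10.0.0.5 resolves to port Gi0/12. The real products automate collecting these two tables, which is where the step counts above come from.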

I’ve provided a comparison of the steps and process to use each product below. Feel free to check out the steps for yourself by downloading a free trial of dopplerVUE.


Friday, March 19, 2010

The Right Network Dashboard Can Make All the Difference

I’ve found that dashboards can be a huge help when managing the network, so I wanted to share some thoughts on the topic. A picture can be worth a thousand words in my experience. The dashboard I use (see below) makes it easy at a glance to see the status of the network and drill down into device details and troubleshoot network problems.

Of course, setting up a dashboard that really provides value can be a time-consuming process. To make it easier, I’ve found it helpful to decide up front which elements are most important to display and then create the dashboard. Another key step is picking the right tool to create your dashboards. Three key questions can help you pick the right tool and avoid confusion and disappointment later in the process.

1. How easy is it to create a dashboard? (Does it require importing or code?)
2. Can it mash-up network topology maps, performance data and alarm data?
3. Does the drill down capability provide for rapid jumping to detailed device/alarm data?

So why are dashboards so valuable? I’m able to display the information that helps me manage the network more effectively. For example, the screenshot of my dashboard below shows a distributed site. When managing across a WAN, I add elements into my dashboard that include where the alerts are, what type of network services are potentially impacted and the overall health and status of the network (bandwidth, router CPU load, etc). My dashboard shows all of these elements, and also goes a few steps further by including a network topology map, the top alarmed devices and customized pieces that show response time and availability of a specific website.

Depending on your specific situation, the dashboard capabilities can extend across multiple systems or take a deep dive into a single service provided by an IT organization. For example, an application dashboard would have a blend of device status, applications status, application services, and status of key dependencies all on a single page.

The dashboard that I’ve included was created with dopplerVUE my network management system of choice. If you’re interested in easily creating a custom dashboard give it a try by downloading the 30-day free trial.

Tuesday, March 16, 2010

Have you heard the Buzz?

Lately, many Gmail users have noticed a new option right below their Inbox that says “Buzz”. If you’re like me, you probably read the quick Google summary and then ignored it while you took care of your email. I finally decided to investigate Buzz a few days ago, and after hearing some colleagues expressing confusion about this new feature, thought I would address a few questions I’ve heard over and over:

What is it?
Google Buzz is similar to an RSS feed, except that it integrates all of your social networking data into one area, while also serving as a messaging tool. Buzz users can share things such as status updates, comments, video and pictures, which makes it fairly similar to Facebook and Twitter. The difference? Instead of going to different apps or sites to check all of your profiles, you can combine it all in Buzz.

Why do I need it?
Those of you that use social networking sites and like integration and consolidation will get the most use out of it. If you’re already using Gmail, Buzz is right there for your convenience. It’s also handy for those who love mobile apps. However, you must have an iPhone/iPod Touch, Windows Mobile, Android 2.0, Openwave or S60, for Buzz to work on your mobile device.

Side note- For those thinking of using Buzz with a supported mobile device, it can integrate your posts with Google Maps so that you can see your location and others around you using Buzz (similar to Google Latitude).

Are there privacy features?
While this is a question I’m still trying to gather all the details on, Buzz didn’t have stringent privacy features at first. However, after Google was threatened with multiple lawsuits in February, they added a few privacy features. There is an option to make your Google profile private, so that you don’t show up in directories and/or searches. You can also choose the sites that you want to link with for your contacts/followers to see, so you control the information you share. Choosing contacts is also completely in your control, and while Google will suggest people from your Gmail contact list, you have the option to add, ignore or even block them. As added value, you can also view who’s following you and choose how you want to react (add, ignore or block), so you can keep tabs on who can see your information.

What type of stuff can I integrate?
Sites currently integrated: Twitter, YouTube, Flickr, Picasa, Google Reader and Blogger.

Can Buzz benefit an IT or Network Manager?
Sure, if you use Gmail (or want to start) and use any of the integrated sites to communicate with others for IT or networking purposes. For instance, if you notice you’re having bandwidth problems, you could tweet about it to notify your coworkers or get advice from your followers, and also upload a screenshot of your network stats to Flickr so everyone can see the details. Both of those examples can be done directly from within Buzz, with no need to visit the Twitter and Flickr websites separately. You could also use Buzz to tap into your Google Reader to look for tips to help increase your network speed or manage your bandwidth more efficiently.

For those of you that love to try out new tools and software, Google Buzz is an efficient venue for sharing information and opinions quickly. Again, being able to do the majority of your sharing within one area (Buzz) is the key.

Since Google Buzz is still pretty new, the adoption rate isn’t as high as Facebook, Twitter or any other mediums of the same nature. However, as Google tweaks it and integrates with additional sites, I can see how convenient it may turn out to be. In an age of information overload, consolidation can really cure a headache!

Is anyone currently using Google Buzz? What do you think of the overall user experience? Any tips to add?

Friday, March 12, 2010

The iPad – The Network Manager’s Friend or Foe?

So I’ve heard all the buzz about the iPad - pretty much impossible to avoid unless you live in a cave. Other than the unfortunate name, I’m intrigued by the technology and its potential to make managing a network a bit easier. I haven’t bought one yet, since I’m still determining whether it’s a network manager’s friend or foe.


On the plus side, from what I’ve read, its size makes it much better than carrying around a larger laptop to monitor, troubleshoot or configure network devices. You could easily load ebooks to help you with troubleshooting in real time. It definitely beats heading over to the office desk to grab a book for reference.

On the negative side, the iPad could easily become a security concern on the wireless network without antivirus and firewall protection. Connecting a device that is geared for personal use into the network could have some serious consequences. The usefulness of the iPad will also be dependent on the applications that are developed to help monitor the network.

Sounds like there is still some development work to be done from a security standpoint before the iPad is ready for prime time for network managers. That being said, as the technology matures and security concerns are resolved it could become a very helpful tool. Are you using the iPad? What do you think?

Wednesday, March 3, 2010

Avoid Traffic Headaches on the Road and in your Network

Traffic congestion on the way to work is a sure way to get an immediate headache. That is why I’m a big fan of viewing live traffic patterns from my smart phone. I get a live view of traffic that shows which routes are congested and clear. With this information, I arrive at the office much faster and in a better state of mind (my co-workers agree).

Wouldn’t it be nice if finding congestion in network traffic was as simple as flipping on your smartphone and pressing a couple of buttons? Maybe someday. In the meantime, to make life as simple as possible, I use dopplerVUE, which has NetFlow built in, so I can look deep into routers and capture rich details about the types of traffic, which IPs are talking and how much bandwidth is being used. Take a look at dopplerVUE in action below. You can try it out free for 30 days.

If you don’t have access to tools like dopplerVUE, there are free tools that can help you as long as you’re willing to invest the time.


There are basically two types of techniques to monitor congestion - packet monitoring and packet capturing. I’ve listed some free tools for both methods below.

Packet Monitoring
Packet monitors count the packets whizzing by and tell you a little bit about them, such as how many packets passed and whether any contained errors. But that is about it; you don’t get much more detail. So this method is good for watching long-term trends.


1) For Windows users, look at the network interface properties. The display shows you packets sent and received. This is an easy way to see if your interface is working.

2) The Windows command line provides a number of useful tools to determine the performance of your TCP/IP connection. The netstat command can give you details about each TCP connection, including how many packets have been processed. Below is the result of a netstat -e command.
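If you want to script a check against those counters, the two-column netstat -e output is easy to parse. A small sketch (the sample fed to it below is a typical shape of the output, not captured from a real machine):

```python
def parse_netstat_e(text):
    """Parse "netstat -e" style output into {label: (received, sent)}."""
    stats = {}
    for line in text.splitlines():
        parts = line.rsplit(None, 2)   # split off the two numeric columns
        if len(parts) == 3 and parts[1].isdigit() and parts[2].isdigit():
            stats[parts[0].strip()] = (int(parts[1]), int(parts[2]))
    return stats
```

From the resulting dict you can, for example, alert whenever the Errors or Discards rows are non-zero.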

The most common communications-related commands available from the Windows command line are listed below:

Packet Capturing
Packet capture stores a copy of each packet that comes by, which allows you to look at all of its characteristics. But all this detail comes with a downside: it will eat up storage space very quickly. So this method is best for capturing a small sample of traffic for deep analysis.
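To put rough numbers on how fast capture files grow, here is a back-of-the-envelope estimate; the function name and the figures in the example are illustrative, not measurements.

```python
def capture_size_mb(link_mbps, utilization, seconds):
    """Estimated capture size in MB for a link at a given utilization."""
    bytes_per_sec = link_mbps * 1_000_000 / 8 * utilization
    return bytes_per_sec * seconds / 1_000_000

# A 100 Mbps link at 30% utilization captured for one hour:
# capture_size_mb(100, 0.3, 3600) -> 13500.0 MB, i.e. about 13.5 GB
```

At that rate a full day of capture would top 300 GB, which is why short, targeted samples are the practical approach.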


1) For packet capture, the gold standard for open source tools is Wireshark. Here is a screenshot of a packet capture done with Wireshark on my laptop. As you can see, every packet is listed with full details about source and destination address, protocol type and data contents.

Wireshark is one of many open source tools that leverage the WinPcap library for network monitoring. A list of tools that use WinPcap can be found here.


2) Windows server users have access to a similar tool called Network Monitor that helps monitor network traffic. Below is a screenshot of Network Monitor in action.

I hope these tools help you avoid congestion on your way to work and in your network.

Tuesday, February 23, 2010

Where are the best jobs in IT going to be in 2010?

Are network related jobs going to be hot in 2010? Looking for an answer to the question, I turned to Network World’s article on the 10 best IT jobs right now. With all the talk of security and virtualization, I was wondering if network related work would make the top 10.


The job of network engineer came in at number four on the list. According to Gartner, interest in networking, voice and data communications technologies increased for 2010, meaning skills in that high-tech area will also be in demand. With the need for social interactions and collaboration, network skills still remain hot.


I’m sure one of the reasons being a network engineer is one of the best jobs is because of all the great network management solutions that make life easier (I’m of course biased). Take for example, the solution I use – dopplerVUE. The software is installed and up and running in less than 30 minutes. It’s great - I get to start working on what I do best as soon as possible. Once installed, the package offers integrated fault, performance and auto-updating discovery across devices, apps, servers and services. Take a free test drive if you want to make your life a little easier.

Some other jobs that made the top 10 included security specialist, virtual systems manager, capacity manager, open source specialist, service assurance manager, electronic health records systems manager, sourcing specialist, service catalog manager and business process manager. Some of these jobs make sense considering the new technology trends, but I’ll admit some took me by surprise. What do you think of some of these jobs? Do you have any nominations for the best IT jobs?

Wednesday, February 10, 2010

Battling Blizzards

Back in the fall I posted about prepping your network for winter weather disasters, and it looks like those tips really came in handy for many this season. Today, the majority of the East Coast is battling a severe blizzard complete with 1-2 ft of snow (4-8 ft drifts!), 60 mph winds with whiteout conditions, power outages and fallen trees. Each one of these factors poses a challenge to your business and your network. For my co-workers in our Washington DC offices, this is the third blizzard this winter, but they have managed to avoid communication and network failures. For those who may not have been hit with a weather disaster yet, you may want to take some time to review these tips to get a head start on future storms:

Questions you should be able to answer:

1. Are you aware of your power situation?
a. What happens when a power outage occurs?
b. What is the operational status of the UPS system?
c. How long will the UPS backup systems sustain key functions?
d. What do you do if the outage lasts longer?

2. What if the building becomes unavailable? (fire or water damage)
a. Are the offsite backups current?
b. If a network device or server is ruined, what is the procedure to replace it?
c. Does everyone know the primary and secondary facility contacts to use should an after-hours emergency occur?

3. What if access to the building is limited? (snow, tornado warnings, etc.)
a. Is VPN access updated for all employees that may need to work from home?
b. Can all of the required maintenance procedures be done remotely or skipped for several days? 


4. What if the phone and/or Internet connection is lost?

5. What is the customer impact when any of these conditions occur?

A few tips:

Advance planning is the best approach. A good network design can minimize the impact of storm and disaster related problems. Having redundant phone and data lines from different carriers minimizes the inbound/outbound traffic risk. Using an adequate number of UPS devices mitigates all but very lengthy power outages, and network routing protocols like HSRP reduce the risk of a single device becoming a point of failure.

Even monitoring your network with disaster prevention in mind can be helpful in avoiding unnecessary failures. These tips are a great starting point: 

1. Enable redundant polling of critical devices  
2. Map out HSRP primary and secondary links  
3. Know the status of the UPS systems  
4. Make sure you have 24x7 access to your management system client
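If you want to see what redundant polling (tip 1) buys you, here's a minimal Python sketch. The pollers are stand-ins for real checks – ICMP pings from two different sites, say – and the names are my own; the point is the decision logic: a device is declared down only when every vantage point misses it, so a single snowed-in poller doesn't trigger a storm of false alarms.

```python
from typing import Callable, List

def device_status(pollers: List[Callable[[str], bool]], host: str) -> str:
    """Cross-check a device from several vantage points.

    If one poller loses its own connectivity (buried link, dead site),
    every device it watches looks down at once; a second vantage point
    turns that false alarm into a "degraded" warning instead.
    """
    results = [poll(host) for poll in pollers]
    if all(results):
        return "up"
    if any(results):
        return "degraded"   # reachable from at least one vantage point
    return "down"           # every poller agrees: genuinely unreachable

# Simulated pollers standing in for real ICMP checks from two sites
primary = lambda host: False   # primary poller's site is snowed in
backup  = lambda host: True    # backup site still reaches the device

print(device_status([primary, backup], "core-router-1"))  # degraded
```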

And for fun (and to get the sympathy of those without 3 feet of snow on the ground), here’s what three blizzards in a row will do to you (and your network!):

Monday, February 8, 2010

Prepare for the New CCNP Tests with FREE Training Books, Videos and Cert Kits

Cisco Press will be giving away 50 copies of its new CCNP Cert Kits and other study guides to help you prepare for the revised CCNP certifications. The mega giveaway is being sponsored by Cisco Press on Network World’s Cisco Subnet – a community website.


The chances are really high that if you enter, you will win something. All you need to do is find the words that form a specific sentence in the provided chapters and submit your answer. Not much work for some free study materials. The contest ends March 31. Register to win one of 10 copies of the following titles:

CCNP Route Cert Kit (Read excerpt.)

CCNP Switch Cert Kit (Read excerpt.)

CCNP Tshoot Cert Kit (Read excerpt.)

CCNP Routing and Switching Official Certification Libraries

CCNP Routing and Switching Quick Reference printed bundle


Good luck and hope you win!

Thursday, February 4, 2010

Improving Network Discovery by using SNMP OID Include/Excludes

An issue that frequently comes up for IT managers is the need to find only certain types of devices within a heterogeneous network that contains many types and manufacturers of networked devices. I recently worked with a customer who wanted to locate about a hundred Windows Servers in a network that contained several thousand devices.

One way to approach this task is to discover all the devices, then pick out the ones you are looking for – the old needle-in-the-haystack routine. This approach is time consuming and error prone. A better method is to leverage the information available from devices that support the SNMP protocol, which includes most operating systems. SNMP exposes a library of OIDs (Object Identifiers) defined by each manufacturer. A Google search for “Windows OIDs” found this site, which listed the OIDs that identify Microsoft server operating systems.

As you can see (table below), the OIDs are built in a hierarchy, so if I search my network for devices whose sysObjectIDs fall under the workstation, server and domain controller subtrees below, I should find all my Windows Server boxes.








You can make the difficult task of finding and sorting networked devices much more manageable. I use dopplerVUE, a network management tool whose OID include/exclude discovery feature simplifies the whole process and helps you find the needle in the haystack faster.
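To make the OID idea concrete, here's a short Python sketch of the subtree matching a discovery tool performs. The Microsoft sysObjectID values below are the commonly published ones for Windows workstations, servers and domain controllers – treat them as illustrative and confirm them against your own devices.

```python
# Classify devices by matching their SNMP sysObjectID against known
# OID subtrees. The Microsoft values here are from public MIB listings
# and are illustrative -- verify them against your own gear.
MICROSOFT_SUBTREES = {
    "1.3.6.1.4.1.311.1.1.3.1.1": "Windows workstation",
    "1.3.6.1.4.1.311.1.1.3.1.2": "Windows server",
    "1.3.6.1.4.1.311.1.1.3.1.3": "Windows domain controller",
}

def in_subtree(oid: str, prefix: str) -> bool:
    """True if `oid` equals `prefix` or sits below it in the OID tree.

    Compare arc by arc -- naive string prefixing would wrongly match
    "1.3.6.1.4.1.31" against "1.3.6.1.4.1.311".
    """
    o, p = oid.split("."), prefix.split(".")
    return o[:len(p)] == p

def classify(sys_object_id: str) -> str:
    for prefix, label in MICROSOFT_SUBTREES.items():
        if in_subtree(sys_object_id, prefix):
            return label
    return "other"

print(classify("1.3.6.1.4.1.311.1.1.3.1.2"))  # Windows server
print(classify("1.3.6.1.4.1.9.1.620"))        # other (a Cisco box)
```

An include filter in a discovery job is doing essentially this check against the sysObjectID each device reports.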

Here are some steps for using dopplerVUE to improve the network discovery process. To get started, the server must have the SNMP agent service running, and you need the credentials (called a community string) to enter in the SNMP service “Security” tab. Most servers use “public” as the default, and community strings are case sensitive. The SNMP service is usually turned off by default, and you’ll need to restart it after making changes.


Once you have the servers set up, create a discovery job within dopplerVUE to find the Windows Servers. dopplerVUE provides a discovery wizard that guides you through the process step by step:


Step 1: Select a discovery method appropriate to the task. Use an IP address range that provides the most control over your discovery results.


Step 2: Set an IP range that includes the Windows Servers you are looking for. Be careful: the larger the range you select, the longer the discovery will take.


Step 3: Select SNMP protocol.


Step 4: Enter the community strings for the servers. Your admin can provide these, and you can always try “public”, which is the default on most servers.


Pictured below is a tab marked “Show sysObjectID include/exclude options”. Click on the tab, expand the window, then select “include” and enter the OIDs we found earlier.
















Step 5: In the workstation column you’ll want to select SNMP poller and then Host MIB if you want to collect information about processor utilization, memory usage and disk space.



Step 6: Optional: Enter a name and description for this discovery job.


Now you can click finish and go to the Inventory>Discovery Jobs tab to watch the progress of the task. The job will start automatically assuming your dopplerVUE discovery service is running and you had the “run now” checkbox selected in step 6. If not, click on the job and start it.


You can watch the progress in the job details section and keep an eye on your inventory tab to see if new devices are being found. When new devices are found, they should appear in the workstation classification. You can change classifications or create new ones easily by right clicking on the objects in the workstation classification list.


This technique works for any search where you can separate the devices by manufacturer. Since each manufacturer determines how to build its SNMP library, you’ll need to understand how it structured its hierarchy. Fortunately there is a lot of good information available on manufacturer websites to help you. Here is more information about SNMP support within Windows.


If you’re looking to improve network discovery and automate IT tasks to save time, try dopplerVUE for free for 30 days.

Friday, January 29, 2010

Network and Security Monitoring – What are the Key Challenges in 2010?

Want to wager a guess as to what the key issues are surrounding network and security monitoring in 2010? If you want to confirm or deny your suspicions, check out a study from Enterprise Management Associates (EMA) that surveyed network and security operations professionals and highlights the challenges and best practices for optimizing monitoring in 2010. In case you don’t have time, here are some of the highlights:

1. 24% of participants reported they lack either the staff to keep up with monitoring tasks or the training for existing staff to keep up with administration and interpretation.

2. A trend spotted by 62% of participants was the movement of staff to more generalist roles reducing the availability of technical specialists (shrinking budgets have made this a real challenge). Does your experience support this trend?

3. 66% of participants indicated they lack enough monitoring tools or the budget to buy them.

4. 47% of respondents were not fully utilizing the monitoring tools they had in place.

It's clear that in 2010 workloads won’t be decreasing and that the trend of doing more with less will continue. Another finding is that IT staff need a tool that centralizes their monitoring into a single system that is simple to maintain and easy to use.

If this rings true for you, I can provide some helpful guidance. My company offers dopplerVUE, a powerful, easy-to-use network management tool that can help you bring security and performance management together in a single system. dopplerVUE is a cost-effective ($10 per element) and proven solution (check out our tutorials). Or download the software and try it for yourself for free. We also offer great customer support to make sure you get the most out of your investment.

That’s my product pitch for the day. Enjoy the weekend!

Thursday, January 21, 2010

Routers in Space…Extending the Internet into the Universe???

Managing a network on Earth is no easy feat. How about in space? This may sound far-fetched, but the concept may be becoming a reality. Internet technology is now being made available from a space-based platform.


Cisco is testing an IP router aboard a satellite in Earth orbit (22,300 miles above the Earth). The aim is to extend IP access to places that aren’t served by traditional phone and wireless networks.


As part of its Internet Routing in Space (IRIS) program, Cisco is testing the router to demonstrate to the Department of Defense (DoD) that the technology can be used to enhance military communications.


Here is a picture of the Intelsat 14 satellite with reflectors deployed for testing. Check out more pictures here.













According to Cisco, IRIS shifts much more of the intelligence to the orbiting router – with potentially dramatic benefits. The long-term goal is to route voice, data and video traffic between satellites over a single IP network in ways that are more efficient, flexible and cost effective than is possible over today's fragmented satellite communications networks.


After testing is completed in April, the IRIS project will be switched over for commercial use.

This is a very interesting development from my perspective. My company has been helping the government manage LAN, WAN, satellite and a range of other networks for some time. Perhaps it's time to take it to the next level….

Friday, January 15, 2010

Networks Could be 10,000 Times More Efficient…Really?????

Did you know the global network currently generates 300 million tons of carbon dioxide a year -- about as much as 15 million cars? I didn’t either until I came across an article on Market Watch. The number is increasing as Internet traffic continues to grow along with a worldwide user base. I was surprised by how much of an impact the network has on the environment. But, there is good news…


Global networks could theoretically run on 10,000 times less energy than they do today, according to scientists and engineers at Bell Labs (the research arm of Alcatel-Lucent). The scientists arrived at the estimate by working out the minimum amount of power theoretically required to run the network. This isn’t my field of expertise, but it seems like a staggering amount of inefficiency.


Why are networks so energy inefficient? Bell Labs says that networks weren’t designed with energy efficiency in mind, but were optimized for performance and simplicity (not too surprising).


So where does this leave us? Alcatel-Lucent and Bell Labs have decided to launch a global consortium called Green Touch whose goal is to develop the technologies needed to make networks much more efficient.


How much more? 1,000 times more efficient than it is now within five years. It's an aggressive goal, considering that a thousandfold reduction is roughly equivalent to being able to power the network for three years with as much energy as it currently takes to run it for a day.


This is great news. It’s going to be interesting to see if the consortium can deliver on its goals. I’m cautiously optimistic – how about you? If you want more details check out the Green Touch press release.

Wednesday, January 13, 2010

How Much is Downtime Costing You? Find out How to Reduce Downtime by 85%...

Any guesses on how much IT downtime is costing your organization? Maybe you already know. In case you don’t - research from analyst groups suggests the cost is anywhere from $42,000 to $90,000 for every hour of unplanned downtime. The estimates (of course) vary greatly based on industry, organization size and other variables, but even if you’re on the low end of that estimate, that’s a lot of money.
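To put your own number on it, the arithmetic is simple. The sketch below uses the analyst range quoted above; the hours-per-year figure is a made-up assumption for illustration, so plug in your own.

```python
# Back-of-envelope downtime cost, using the analyst range cited above.
LOW_RATE, HIGH_RATE = 42_000, 90_000   # dollars per hour of unplanned downtime
hours_down_per_year = 10               # hypothetical: under an hour a month

low = LOW_RATE * hours_down_per_year
high = HIGH_RATE * hours_down_per_year
print(f"Estimated annual cost: ${low:,} - ${high:,}")

# IDC's "up to 85%" reduction figure, applied to the same range
reduction = 0.85
print(f"Potential annual savings: ${low * reduction:,.0f} - ${high * reduction:,.0f}")
```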


In a recent white paper “Business Operations Disruption Risk: Identify, Measure, Reduce”, IDC highlights the application of five best practices that can help reduce unplanned downtime by up to 85% (derived from interviews with multiple midsize companies).


• Consistent use of management software reduces network and system downtime by 65%
• Upgrading servers/storage/network equipment reduces downtime by 50%
• Enabling high-availability failover clustering software reduces downtime by 43%
• Adopting industry best practices standards (e.g., ITIL, Cobit) across the organization reduces downtime by 13-15%
• Using virtualization software reduces server downtime by 10%


I’m guessing you’re already using some or a combination of these tactics to mitigate the risk of downtime. Most of these best practices are well known, although I did find the percentages associated with each one interesting. Do you find them to be accurate based on your experience?


Anyway…if you’re looking for guidance in any of these areas, let me know. My company specializes in IT services and network management software (free trial for 30 days).

Wednesday, January 6, 2010

Network Management Security

Happy New Year! I recently read a great article on network management security by Joel Snyder, and thought it would make a nice first topic for 2010. His article offers up several very good tips that you should consider implementing if you haven't done so already. Check it out:
 
It doesn't happen very often, but when Cisco sends out a security advisory about one of their routing or security products, there's a big splash. Almost all of these advisories can be summarized like this: "If someone out on the Internet sends some bad packets to your Cisco device, and if your device is listening to them, then something bad will happen." 

The phrase in that alert you need to pay attention to? "If your device is listening to them." 

It shouldn't be. 

Do you have SNMP enabled on edge devices? Fine… so long as you also have an access list saying that those SNMP packets can only come from your management station. Is the management interface, whether HTTP, HTTPS, SSH or (heaven forbid) Telnet, running? 

Fine … so long as no one outside your network can ever get there. 

We call this the "control plane" or "management plane." Think of it as a different network that runs in parallel to your data network, and is used to control, monitor and manage the data network. In huge networks, there is a true network control plane that is completely separate from the data that the device sees. But in many smaller networks, control plane, management plane, and data plane run on the same wire.

You can, and should, secure your network control plane in many ways, but they mostly come down to two techniques: access control lists and self-protection.

ACCESS CONTROL LISTS MANAGE TRAFFIC TO EDGE DEVICES 
Access control list protections usually occur when you put a block of some sort in non-firewall devices at the edge and core of your network. A good example is SNMP. Let's say you have an SNMP management station at 10.20.30.161. Traffic between that station and your network and security devices is the one valid SNMP flow. Now, any other SNMP traffic floating around on your network, or coming in from the edge, should be blocked. If you have intermediate routers in your network, and certainly if you have firewalls, you should use them to block SNMP traffic -- and any other management traffic -- to your security and network devices, except from authorized sources.


You can get as strict as you want. For example, you can simply block all SNMP anywhere in your network except to and from the official management station. Here's an example using Cisco Systems Inc. access list syntax (once you define these access lists, don't forget to apply them to the appropriate interfaces):

permit udp host 10.20.30.161 any eq snmp
permit udp any host 10.20.30.161 eq snmptrap
deny udp any any eq snmp log
deny udp any any eq snmptrap log


Or you could put a block in to just protect the network and security devices. Usually, stricter is better, but if you don't know who else might be using SNMP, then you can focus on the devices that run your network.

At the edge, a much stricter approach is appropriate. In this case, you should be blocking all traffic directed at your firewalls, load balancers, and routers on their management addresses. Remember: No one on the Internet needs to send packets to your firewall, or to your external router. They legitimately send packets through those devices all the time, but the packets are never destined (at the IP layer, anyway) directly to the device. They're always for some IP address behind the device. The only time you may want to consider letting traffic come directly to the management IP of your external security and network devices is for PING traffic -- it's a very useful debugging tool and usually worth allowing in.


Here's an example, using Cisco syntax, of blocking access to a device 128.182.35.42: 

deny ip any host 128.182.35.42

If you wanted to block all SNMP incoming, you could do something like this:

remark *** Deny all other SNMP incoming
deny udp any any eq snmp
deny udp any any eq snmptrap


If you're in a NAT environment and you're using the external IP address of your firewall or router both for management and NAT, here is some advice: don't do that. You're asking for security trouble, because now you have the same IP address being used for two things. IP addresses may be in short supply, but they're not in that short supply. Here's an example in case you can't separate NAT from other traffic, assuming you know which ports your router or firewall is listening on (not a very good assumption, as the Cisco advisories show):

remark *** Block obvious access to mgmt plane; allow others
deny tcp any host 128.182.35.42 eq 22
deny tcp any host 128.182.35.42 eq www
deny tcp any host 128.182.35.42 eq 443
permit ip any host 128.182.35.42


CONFIGURE SECURITY DEVICES TO IGNORE UNAUTHORIZED TRAFFIC
The second protective technique is self-protection: configuring the network or security device so that it doesn't listen to traffic it shouldn't hear. On devices such as routers, create a local access list that only allows management traffic from authorized management networks. If you can, also disable management protocols and interfaces you aren't using. On devices such as firewalls, there is often a series of check boxes that let you turn management on or off per interface. It never needs to be enabled on the outside. That's what VPNs are for, if you really need external management.


Sometimes you want to disable protocols entirely. Most people, for example, do not manage Cisco routers using HTTP. Here's a configuration example that's double overkill: disabling the HTTP server, and then also putting an access list on it, just in case.

no ip http server
ip http access-class 21
ip http authentication local
no ip http secure-server
access-list 21 deny any


And even if you do have management enabled, you should also add lists of authorized management addresses. It shouldn't be possible for someone who happens to be inside your network to connect to the management IP of your firewalls, routers, or other security devices, unless they're on the official management network.

For example, again using Cisco syntax, here is what the SNMP part of the router configuration might look like in a self-protective mode of operation:

snmp-server community public RO 6
snmp-server community vewysekwitpassword RW 6
snmp-server location Opus One/Tucson, Arizona
access-list 6 permit 203.209.92.105
access-list 6 permit 192.245.12.0 0.0.0.255
access-list 6 deny any log
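If you're curious how that wildcard-mask matching actually works, here's a rough Python model of access list 6 above. This is a simplification for illustration – it ignores logging and is nothing like how IOS implements ACLs internally – but the match logic is the real rule: wildcard bits set to 1 are "don't care" bits.

```python
import ipaddress

# A rough model of standard access-list 6 above: (action, network, wildcard).
ACL_6 = [
    ("permit", "203.209.92.105", "0.0.0.0"),        # the management station
    ("permit", "192.245.12.0",   "0.0.0.255"),      # the management subnet
    ("deny",   "0.0.0.0",        "255.255.255.255"),  # deny any (log)
]

def matches(addr: str, network: str, wildcard: str) -> bool:
    """Cisco wildcard-mask match: compare only the bits the mask cares about."""
    a = int(ipaddress.IPv4Address(addr))
    n = int(ipaddress.IPv4Address(network))
    w = int(ipaddress.IPv4Address(wildcard))
    care = 0xFFFFFFFF ^ w          # invert the wildcard to get the "care" bits
    return (a & care) == (n & care)

def acl_action(addr: str) -> str:
    for action, network, wildcard in ACL_6:
        if matches(addr, network, wildcard):
            return action          # first match wins
    return "deny"                  # implicit deny at the end of every ACL

print(acl_action("192.245.12.77"))  # permit (inside the management subnet)
print(acl_action("10.99.0.5"))      # deny
```

Only sources matching one of the permit lines ever get an SNMP conversation with the router; everything else is dropped (and, in the real config, logged).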


In summary…
1. Your public-facing devices should not have non-essential ports open. 
2. SNMP is safe when an access list limits which hosts your routers will send SNMP traffic to and receive it from. 
3. Use a management or control plane to isolate monitoring and management tasks from other traffic. This lets you open the ports and protocols often necessary for full functionality.

Do you have any network management security tips that you frequently use or follow?