Saturday, 30 November 2013

What are RadioTap Headers?

I've been doing some study for my CWAP (wireless analysis) exam recently, so I've been spending quite some time staring at Wireshark traces trying to figure out precisely what all of those 802.11 fields actually mean.

One thing I noticed whilst poring over a few capture files is that some of them seemed to have additional fields included in the trace, which appeared to have nothing to do with the fields defined in 802.11 frames at all. They are in a section of the packet decode called 'RadioTap Headers'. I wasn't too sure what they were, or why they were available in some captures but missing in others.

After a little bit of research, I found out a bit more information and thought it might be worth sharing in a quick blog post.

In summary, radiotap headers provide additional information that is added to each 802.11 frame when capturing frames with an analysis application. Just to be clear, these are not part of the standard 802.11 frame format, but are additional information added at the time of capture to provide supplementary data about the frames captured. For instance, in a standard 802.11 traffic capture, there is no information regarding the receive signal level for the frame at the time of capture, which could be very useful. As another example, there is no information about which channel is being used by a station that has generated the frame, which again could be very useful.

In short, radiotap headers supplement the raw frame capture data with information that cannot be derived simply by analyzing the 802.11 frames themselves.

The next logical question is: "how do I get radiotap headers in my captures?". The headers are added by the wireless adapter (or its driver) that is being used to perform the frame capture. If the adapter does not inject the additional information as it captures frames, then no radiotap headers will be added.

I guess the best way to verify this is simply to perform a capture and see whether the headers appear when you open it in Wireshark. In my own particular case, I was performing a capture with an AirPcap NX card, which provides radiotap information. However, I also performed captures with the internal WLAN NIC of my laptop, which does not provide radiotap data.

Here are a couple of examples so that you can see the difference between captures with and without radiotap headers:

Fig 1 - Beacon frame, no radiotap data

The capture above shows a standard capture of a beacon frame with no radiotap headers (taken with my laptop WLAN NIC). Next, we see a beacon frame again, but this time with radiotap information included (taken with the AirPcap card):

Fig 2 - Beacon frame, with radiotap data


You can see the additional radiotap section in the frame decode highlighted with a red circle above.

Next, we'll snap open the radiotap headers and take a look at the information available:

Fig 3 - Radiotap headers detail


Right away, you can see fields which give us great supplementary information about the RF environment that the capture was taken in. Looking at the trace above, we can see that this frame was captured on channel 6 on the 2.4GHz band and that it was received by the wireless NIC capturing the frame at a level of -44dBm.
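
As an aside, if you want to pull these values out of a capture file programmatically rather than reading them in Wireshark, most capture libraries will expose the radiotap fields. Here's a minimal sketch using Python and Scapy (this assumes Scapy 2.4 or later and a monitor-mode capture saved as 'capture.pcap'; the exact RadioTap field names can vary between Scapy versions, so treat it as illustrative only):

from scapy.all import rdpcap, RadioTap

for pkt in rdpcap("capture.pcap"):
    if not pkt.haslayer(RadioTap):
        continue  # this frame was captured without radiotap headers
    rt = pkt[RadioTap]
    # Per-frame radio information injected by the capturing adapter/driver
    signal = getattr(rt, "dBm_AntSignal", None)    # receive level in dBm
    freq = getattr(rt, "ChannelFrequency", None)   # centre frequency in MHz
    print("signal:", signal, "dBm  frequency:", freq, "MHz")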

You can see that there are also additional snap-open sections to view even more information (i.e. Present flags, Flags and Channel type), so we'll take a brief look at each of those too.

Present Flags


Fig 4 - Radiotap 'Present Flags' section

The 'Present flags' section is a matrix of the information that is available in subsequent sections of the radiotap headers. The flags will vary depending on the information that can be provided by the NIC card that is performing the capture. (To see a comprehensive list of all possible fields and their meaning, take a look at the following page: http://www.radiotap.org/defined-fields )

For each field that is marked as 'true', there will be information for that section appearing in the radiotap headers that follow the 'Present Flags' matrix. For instance, in the matrix shown above we see that the flags for 'Channel' and 'dBm Antenna Signal' are true. We've already seen in a previous screen-shot that these fields are included in the headers shown in the frame decode.
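
Under the hood, the 'Present Flags' section is just a 32-bit bitmask at the start of the radiotap header. As a rough sketch (bit positions taken from radiotap.org/defined-fields; only a handful are shown here, and bit 31 indicates that a further 'present' word follows), decoding it looks something like this:

import struct

# A few of the radiotap 'present' bits (see radiotap.org/defined-fields)
FIELDS = {0: "TSFT", 1: "Flags", 2: "Rate", 3: "Channel",
          5: "dBm Antenna Signal", 6: "dBm Antenna Noise"}

def present_fields(radiotap_bytes):
    # Radiotap header: version (1 byte), pad (1), length (2), present (4), little-endian
    _version, _pad, _length, present = struct.unpack_from("<BBHI", radiotap_bytes, 0)
    return [name for bit, name in FIELDS.items() if present & (1 << bit)]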

Flags

Fig 5 - Radiotap 'Flags' section


The flags section provides us with even more information about the captured frame itself, giving details about the preamble type, whether a short guard interval was used and whether the frame has a valid frame checksum.

Channel Type

Fig 6 - Radiotap 'Channel Type' section

In the 'Channel Type' section, we gain even more information about characteristics of the channel itself, such as which band is in use, the modulation type used and channel width information.

More Information

The examples shown here represent a subset of the fields that may be included in radiotap headers. To view all possible fields, and to understand their meaning, take a look at the following page: http://www.radiotap.org/defined-fields

To find out more about radiotap headers, take a look at http://www.radiotap.org which has comprehensive information on all aspects of radiotap headers.

Sunday, 10 November 2013

Antenna Radiation Patterns in the Real World

I was recently reading through the most recent edition of the finest WiFi text book in the world (in my opinion): the CWNA study guide. I read the previous versions a couple of times when I took my original CWNA exam and subsequent renewals.

Looking through the latest book, I've picked up a few nuggets which I either missed, or which weren't included in the previous versions that I read. I had one "light bulb" moment when reading about antenna radiation patterns.

If you've ever looked at datasheets for antennas or access points, you may well have come across diagrams (like those shown below) that show the radiation pattern of an antenna.

Fig1 - Antenna Azimuth Chart

Fig2 - Antenna Elevation Chart

There are generally two types of radiation pattern shown for each antenna:
  • Azimuth (i.e. the RF radiation pattern when viewed from above the antenna)
  • Elevation (i.e. a side-on view of the antenna RF radiation pattern)
These are useful to understand the characteristics of the antenna, showing how directional the antenna may be, together with the presence of side-lobes, back-lobes etc. They provide a guide to how useful a particular antenna may be for a particular RF coverage requirement.

If you look at the examples shown above (which show a semi-directional patch antenna), you can see that the radiation tends to be mainly on one side of the antenna, giving it directional characteristics.

You may also note that in the charts shown, a number of concentric circles originate from the center of the chart, showing a number of dB (decibel) levels. These show the loss or gain provided by the antenna for a 360 degree sweep around the antenna. The intervals will vary from chart to chart. Typical intervals are 10dB, 5dB and 3dB - but be sure to understand the interval used.

Antennas provide varying levels of gain (a positive dB value) or loss (a negative dB value) as measurements are taken around the antenna. The physical characteristics of the antenna will determine what levels of gain or loss may be found around the antenna (and hence how directional it may be).

It is worth noting that if the outer power level of a radiation chart shows a value of '0dB', then the chart has been 'normalized'. This means that the actual gain of the antenna itself has been removed from the readings shown. This allows a more granular view of all levels on the chart. But, it needs to be factored back in when performing any calculations that use the chart for reference. 

The key point to note here is that the charts show the level of gain or loss provided by an antenna in Decibels (dB). Decibels are essentially a logarithmic measurement of the gain or loss that the antenna provides. Without getting too deeply in to the mathematics of  dB measurements (I'll leave you to read the CWNA study guide for that), the radiation patterns show gain or loss from a logarithmic point of view, rather than a (real-world) linear perspective.

As an example of this, a common rule of thumb when considering RF coverage is that an increase of 6dB in gain will actually double the distance that an RF signal will cover. Each 3dB increase in gain equates to a doubling of signal level. Therefore, an increase of 6dB (two x 3dB increases) actually quadruples the signal level. But, as previously stated, this 6dB gain leads to a doubling of the distance covered (for a 4 x signal level increase). There is evidently not a linear relationship between power levels expressed in dB and the power levels experienced in the real world.
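
To put some rough numbers on those rules of thumb, here's a quick back-of-envelope check in Python (free-space assumptions only):

ratio_3db = 10 ** (3 / 10)         # ~2.0  -> +3dB roughly doubles the power
ratio_6db = 10 ** (6 / 10)         # ~3.98 -> +6dB roughly quadruples the power
# In free space, path loss grows with 20*log10(distance), so the extra
# 6dB of gain buys 10**(6/20) ~= 2x the range - not 4x.
range_factor_6db = 10 ** (6 / 20)  # ~2.0
print(ratio_3db, ratio_6db, range_factor_6db)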

Apologies if this seems a little over-complicated, but the point is that the radiation patterns shown in antenna data-sheets do not represent the actual radiation pattern you will see in the real world. 

Logarithmic scales are used for antenna radiation charts as they provide an opportunity to cram quite a bit of information on to a chart. Granular information can be maintained for a greater range of power levels by using a logarithmic dB scale - this would not be the case if linear scales were used. But, the scales do not represent what you will see in the real 'linear' world. If you interpret them as a literal representation of the coverage performance of an antenna, you might be a little disappointed.

I found this to be a fascinating point that I had not really considered or understood before. 

I set out to find a way to visualize what radiation patterns might look like in the real world.

This would require a bit of manipulation of the equation for 'Free Space Path Loss' (FSPL), which is no mean feat for someone as mathematically challenged as myself:

Fig3 - Free Space Path Loss Calculation 

The FSPL equation is used to show how signal levels will reduce as they travel through space away from a transmitter. It uses the variables of frequency and distance to show the loss applied to a signal as it travels through space. 

Using the FSPL formula we should be able to calculate the actual distance at which a particular signal level occurs. For instance, if I am able to supply the variables for a signal frequency (i.e. 2.4GHz or 5GHz) and the gain of the antenna, I should be able to calculate the distance at which a signal level occurs for a particular transmit power applied to an antenna.

In simpler terms, if I know that I am applying a 1mW (0dBm) signal to an antenna, I am operating at 2.4GHz and I want to know where the -65dBm boundary of my signal falls (in terms of distance), I should be able to calculate it using the FSPL equation.
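
As a rough sketch of that calculation (free-space only - no cable or connector losses, and nothing about the receiver - using the common form of the FSPL equation with distance in km and frequency in MHz):

import math

def distance_m(tx_dbm, antenna_gain_dbi, target_dbm, freq_mhz):
    """Distance at which a free-space signal falls to target_dbm."""
    allowed_loss = tx_dbm + antenna_gain_dbi - target_dbm   # path loss budget in dB
    # FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44, rearranged for d
    d_km = 10 ** ((allowed_loss - 32.44 - 20 * math.log10(freq_mhz)) / 20)
    return d_km * 1000

# 1mW (0dBm) into a 6dBi patch antenna at 2.4GHz, -65dBm cell edge:
print(round(distance_m(0, 6, -65, 2400)), "metres")   # roughly 35m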

Whilst scratching around in Excel, I found a type of chart I had never seen before: a 'Radar' chart, which is very similar to the polar charts used for antenna radiation patterns.

After much head-scratching and experimentation, I came up with a formula that would calculate the projected distance from an antenna for a particular signal level, taking account of the signal frequency and the antenna gain. I then took a sample of points from an antenna polar chart and typed them in to Excel as a data series. By applying my formula I was able to create a chart showing the radiation pattern of an antenna that shows actual distances rather than logarithmic loss/gain values.

The results were fascinating. 

As shown in the CWNA study guide, when looking at the true linear values of actual coverage, the true extent of the directional nature of antennas becomes apparent. 

It was fascinating to see how much flatter the elevation pattern of a dipole antenna actually is, with side lobes travelling significant distances compared to the area above and below the antenna. Also, the directional nature of patch antennas is far more apparent than is suggested simply by looking at manufacturer-supplied polar charts.

The charts created using the Excel Radar charts are not as granular as the original polar charts, due to the smaller number of sample points that it is possible to input. But, they demonstrate the real-world effect quite nicely. If you wanted to reproduce the full data from the original chart, you'd need to spend a lot of time taking measurements and adding them in to the Excel data series. Realistically, the examples I have shown have little real-world design value, as they miss a lot of important data due to their lack of granularity.

Here are a few samples which I put together:

Cisco AIR-ANT2460P-R

In this example I looked at a Cisco AIR-ANT2460P-R, which is a (2.4GHz) 6dBi patch antenna. 

Below, you can see the azimuth polar chart for the antenna from Cisco. Below the Cisco chart, you can see the calculated, real-world signal coverage for a -65dBm cell edge that the antenna might provide for a 1mW (0dBm) signal applied to the antenna. It is interesting to see the distance that this covers, but it is even more interesting to notice how much narrower the width of the coverage area is compared to what you might expect looking at the original polar chart.

Fig 4 - Cisco AIR-ANT2460P-R Azimuth Chart

Fig 5 - Cisco AIR-ANT2460P-R  Coverage (1mW, -65dBm cell edge)

Cisco AIR-ANT1728

In this second example, I looked at the Cisco AIR-ANT1728 (2.4GHz, 5.2dBi) dipole antenna. 

Again, I assumed a 1mW (0dBm) input to the antenna and looked to see what the -65dBm cell edge pattern might look like in the real world. Again, the difference between the pattern observed on the manufacturer-supplied polar chart compared to the real-world radiation pattern is striking. It's very interesting to see how much flatter the pattern is, with the side lobes extending much further in comparison to those shown on the original chart. 

One aspect of using high-gain dipoles that has always been an area of concern for me is exactly how high you could hang a dipole in a warehouse before you might start to run in to issues around the lobes below the  antenna not reaching the floor. Looking at this, you start to get some idea of the distance involved. Obviously, at higher powers this is less of an issue and of course here we are only considering direct line-of-sight paths, whereas in reality signals may arrive via other paths (e.g. reflections).
Fig 6 - Cisco AIR-ANT1728 Elevation Chart


Fig 7 - Cisco AIR-ANT1728  Coverage (1mW, -65dBm cell edge)

Conclusions

The examples and data I have presented realistically have little value in real-world design, but they emphasize the point that antenna polar charts need to be interpreted with a trained eye to anticipate the actual coverage that they might provide. They give an indication of the true nature of the real-world radiation pattern that they represent.

But, it must be remembered that in the real world there is a whole host of other propagation factors which come in to play (e.g. reflections, refraction, multipath, beam-forming etc.). The charts also take no account of the effect of a receiving station's capabilities. In addition, there are cable losses and connector losses between the AP and antenna which would also affect the real-world coverage that is achieved. The only way to gain a real-world view of the true coverage provided is, of course, with a wireless survey.

Even so, it is an interesting exercise in understanding the linear nature of coverage that is not apparent from the manufacturer data sheets.

I've generated a number of these charts using Excel. The original sheets are available for you to download and have a play with yourself. 

Have fun!


Tuesday, 5 November 2013

Defaulting Cisco LWAPP/CAPWAP APs When You Have No Login Credentials

Occasionally you may come across an instance where you need to reset a Cisco 'lightweight' AP to its default configuration. However, if the AP is not associated to a controller and you do not know the local username/password of the AP, then this can be something of a challenge.

In summary, here are the steps to default the AP when you cannot get in to the AP via the 'usual' methods:

  • Put a console cable in to the AP and fire up your terminal emulation program
  • Power up the AP with the reset button pressed at the same time
  • Release the reset button after 15 - 20 secs
  • On the console, you should now be dropped in to an 'ap:' prompt.
  • Type in the following command to see the files on the AP: 'dir flash:'
  • One of the files listed should be 'private-multiple-fs'
  • Enter the following command to remove the configuration: delete flash:private-multiple-fs
  • Reboot the AP - you will be able to login to the AP using the usual defaults (i.e. enable/Cisco)

(Note: this cannot be performed remotely, you must be able to physically access the AP)
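
Pulled together, the session at the 'ap:' prompt looks roughly like this (commands only, output omitted; 'boot' is my assumption for the reboot step, and the exact prompts can vary by AP model):

ap: dir flash:
ap: delete flash:private-multiple-fs
ap: boot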

Sunday, 22 September 2013

Devin was right...?

In the WiFi industry, there are few characters more polarizing than Devin Akin (@DevinAkin). I guess he is the 'Marmite' of the WiFi industry: you will generally be a huge fan, or maybe not so much :)

I personally have always been a huge fan of the work he did when he was part of the CWNP organization - I would not be in the position I am now without the fantastic work that Devin and the guys over at CWNP have done in providing vendor-neutral WiFi certifications.

However, back at the beginning of 2012, Devin had moved to Aerohive and was presenting at the WiFi Symposium, which was part of the Wireless Field Day 2 event. I watched all of the videos from that event and learned some very valuable information. However, Devin's presentation about Aerohive's architecture (which you can see at the bottom of this article), and his belief that in the future other vendors must move in a similar direction, was a 'light-bulb' moment for me. I had only been involved with controller-based solutions to that point and was fascinated with this 'new' approach. I was even more fascinated by his predictions around how the WiFi industry would evolve around cloud and controller-less solutions.

I watched this video again this morning, nearly 2 years later, and realized that in many areas he had pretty much nailed it. Although the industry isn't quite there yet, most vendors have moved, or are moving, in the directions he predicted. In retrospect, whether you're a Marmite fan or not, it's pretty impressive.

I'm hoping to be meeting up with Devin in the near future, following his move to AirTight networks. You can be sure I'll be paying very close attention to what he has to say...


Sunday, 25 August 2013

Configuring DHCP Option 226 on a Cisco Router/Switch for an Aerohive AP

There are a number of methods of directing an Aerohive AP to find its instance of HiveManager, including using a DHCP option.

In this quick tip, I share how to set up a Cisco switch or router DHCP server range to provide the correct  DHCP option to direct an Aerohive AP to a local instance of HiveManager. This is useful if you have a copy of HM running on your own appliance or virtual server.

APs may be passed the IP address of HM via DHCP option 226. In the example below, APs are assigned addresses in the range 192.168.20.0/24. The HiveManager server may be found at 192.168.50.7 in this example.

The default router and DNS server options will need to be set to match your own environment.

!
! DHCP range for Aerohive APs
! (HM address passed to AP using option 226)
!
ip dhcp pool APs
   network 192.168.20.0 255.255.255.0
   default-router 192.168.20.254
   dns-server 8.8.4.4
   option 226 ip 192.168.50.7
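
Once an AP has requested an address, a couple of standard IOS show commands are useful for checking that the pool is being handed out as expected (the pool name 'APs' matches the example above):

show ip dhcp pool APs
show ip dhcp binding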

Hopefully, this is all fairly self-explanatory if you are familiar with IOS :)

Saturday, 24 August 2013

The Missing Feature in WiFi Solutions: Performance Testing

In this article I suggest a feature that would provide useful WiFi vendor differentiation and a valuable tool for administrators of WiFi networks.

As a WiFi network value-added reseller (VAR), I visit a lot of customers, interacting mainly with the poor, down-trodden folks who comprise the IT department of an organisation. They are generally responsible for fending off the daily barrage of complaints about "the network".

In general, they are mainly concerned about two factors when it comes to their wireless network: coverage and performance. There are many other factors that they should probably be concerned about, but these are the two factors that tend to keep users off their back if they are both taken care of.

Verifying WiFi coverage for an average IT administrator is generally very simple. They simply do a Google search, download a tool such as Metageek's inSSIDer and visit the area where users are complaining. Even if they don't manage to use a dedicated tool such as inSSIDer, they will generally check the 'number of bars' displayed on a wireless client to get an idea of signal strength in an area. They will very quickly be able to determine if they have a coverage issue by verifying signal levels around a particular area. (Note: this isn't the only factor to consider, there may be other issues such as non-WiFi interference from security cameras, microwave ovens etc., but that's a discussion for another day.)

However, once coverage has been verified, then comes the more challenging issue of testing performance over the wireless network. The number one way I tend to see end users testing their WiFi network (especially on tablet or smartphone devices) is to fire up an app such as SpeedTest.net and verify the throughput to the Internet. Whilst this gives a quick and easy indication of how throughput might look, there are a lot of holes in this approach, not least because it relies on both the bandwidth and current utilization of your Internet pipe. In many organizations, the results vary based on time of day (i.e. number of users on the Internet). It also provides no differentiation between the various components that the test data may traverse (i.e. wireless, wired and Internet).

There are several options that could be deployed by IT administrators, such as setting up an iPerf server somewhere on the network. They could then grab a copy of iPerf off the iTunes store or the Play store (Android) to do some throughput testing. However, I generally never see anyone using this approach (I'm guessing because most folks don't have the time, knowledge or inclination to do it...).
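
For anyone who does fancy trying it, the basics are only a couple of commands (this assumes iPerf 2.x and a wired machine at, say, 192.168.1.10 acting as the server - the address is purely illustrative):

# On the wired machine acting as the server:
iperf -s

# On the wireless client (30-second TCP throughput test):
iperf -c 192.168.1.10 -t 30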

For me, the answer is simple: WiFi vendors should build performance testing features in to their products.

Here are some high-level thoughts on how this might look: 
  • A server process along the lines of iPerf built in to each AP or controller
  • A reserved IP address (nominated at install-time) on each SSID (or even a hidden test SSID) for testing
  • A wireless-vendor created user app that could be downloaded from your favorite app store. This would allow IT admins to test the WiFi performance using various presets (e.g. voice/data etc.), taking away the complexity of the myriad of confusing options in tools like iPerf
Given the processing power of the hardware in current WiFi solutions, how hard could this be!?!?

The benefits for IT admins of a feature of this type are enormous. It would certainly help to prove whether or not "it's the wireless network", or whether perhaps they need to look elsewhere. An approach like this would be a nice differentiator for a wireless vendor, as I'm not aware of anyone providing this kind of function integrated in to their solution (let me know if I'm wrong about this).

The marketing benefits of an app like this for a vendor are hopefully self-evident (brand awareness, targeted ads/messages). I suspect that the screen real-estate for WiFi testing tool vendors would be very appealing too - imagine the attraction of advertising to a user who is currently suffering a WiFi issue!

In addition, this type of functionality could be very useful to those of us who perform wireless surveys, perhaps providing an easy method of performing active surveys if an iPerf server was available as an integral function of WiFi infrastructure kit.

Well, that's my ten cents/pennies worth (depending on where you live). It would be nice to see a vendor implement something along these lines...anyone..? "Bueller, Bueller, Bueller....."

Update 25/08/13: Since posting this, I had some feedback on Twitter from various folks (@revolutionwifi, @wifikiwi, @dszp) about some vendor products that have iPerf (or similar test features).

I believe that Aerohive has iPerf available from the CLI on some of its products, Ruckus has the 'Zap' utility and Aruba has some form of client test utility. I don't get to use Ruckus or Aruba, but I'll be having a closer look at the Aerohive iPerf server.

Looking on the Apple app store, the Ruckus 'Zapper' app looks very promising and is probably well worth checking out if you have a Ruckus network.

Despite these few items, the landscape for client testing across many vendors is still not exactly 'great' and there is a lot of work to do to enable end-users to easily evaluate their WiFi network performance.

Friday, 16 August 2013

How Much Air-Time Do Beacons Actually Burn?

It’s a well known rule of thumb when designing WiFi networks that you need to try to keep the number of SSIDs broadcast by your wireless network  down to a ‘reasonable’ number. In this article, I take a look at how much of an issue SSIDs (and their beacons) are in consuming valuable wireless air-time.

Generally, it’s recommended to keep the number of SSIDs below around 5 (ish).

The reason for keeping the number of SSIDs to a minimum is that each SSID is advertised using a type of management frame called a ‘beacon’.  Beacons are generally sent 10 times per second for each SSID on the wireless network. Therefore, if you have 10 SSIDs, they will each be advertised 10 times per second, giving us 100 beacons per second.

Air-time is a finite resource – there is only so much data that can be transferred across the air over a period of one second. If a large chunk of air-time is being consumed by SSID beacons, then that doesn’t leave a whole lot of time remaining for actual user data to travel over the air (which is the whole point of having a wireless network!).

I have previously heard statements from various wireless engineers along the lines of up to 50% of available air-time being consumed by beacons once you have 6 or 7 SSIDs being broadcast by a network. I’ve taken this information on face-value and never really thought too much about it.

However, this evening I found myself in a hotel room with some time on my hands, a Cisco WLC, a Cisco AP and a copy of Metageek Eye PA. I thought it was time to test the ‘conventional wisdom’.

My approach was simple: I would set up my AP on channel 11 (2.4GHz) and capture all frames using Eye PA. I would vary the number of SSIDs being broadcast and monitor the results.

I would also vary the lowest mandatory speed supported by the 2.4GHz network between 1Mbps, 11Mbps and 54Mbps. Beacons are sent at the lowest mandatory speed that is configured for a wireless network. Therefore, if 1Mbps is the lowest mandatory speed, beacons are sent at 1Mbps (and hence are a lot slower and consume more air-time).
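
As a rough sanity check on the measured numbers further down, you can estimate the beacon overhead with a bit of simple arithmetic. This sketch assumes a ~300-byte beacon and a 192µs long preamble (real beacon sizes vary with the information elements each SSID carries, and the preamble overhead differs at OFDM rates, so treat the output as ballpark only):

def beacon_airtime_ms(num_ssids, rate_mbps, beacon_bytes=300,
                      preamble_us=192, beacons_per_sec=10):
    # Air-time per beacon = preamble + (payload bits / data rate)
    per_beacon_us = preamble_us + (beacon_bytes * 8) / rate_mbps
    return num_ssids * beacons_per_sec * per_beacon_us / 1000

for ssids in (1, 7, 15):
    print(ssids, "SSID(s) at 1Mbps:", round(beacon_airtime_ms(ssids, 1)), "ms per second")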

To determine how much actual air-time is being consumed by beacons, I would use Eye PA’s filtering capabilities to remove all frames except beacon frames, and remove any other local interfering SSID traffic (i.e. the pesky hotel WiFi on the same channel!). This would leave me with just the beacon frames from my AP:

Eye PA Filtering Beacon Frame


Eye PA allows you to select a period of one second of the filtered traffic that you have captured, and also shows the amount of air-time those frames consumed in that period:



I then applied some simple maths to work out how much time the beacon frames consumed over a period of one second.
I then tabulated the results:

Number SSIDs Broadcast   Lowest Mandatory Speed   Beacon Air-Time Over 1 Sec (ms)   Percentage Air-Time Used by Beacons
1                        1Mbps                    25                                3%
7                        1Mbps                    167                               17%
15                       1Mbps                    326                               33%
1                        11Mbps                   10.5                              1%
7                        11Mbps                   73.5                              7%
15                       11Mbps                   158                               16%
1                        54Mbps                   1.52                              0%
7                        54Mbps                   10.6                              1%
15                       54Mbps                   20.8                              2%

The results show pretty much what I expected, but I was surprised by how little time the beacons consumed, particularly once the lowest mandatory speed is ramped up to 54Mbps. They certainly don’t support the information that had been imparted to me regarding 7 SSIDs consuming 50% of all air-time.

You can clearly see the effect of adding more SSIDs (and consequently more beacons). As more SSIDs are added, more air-time is devoted to beacon traffic. This is a bad thing, if it becomes a significant chunk of your air-time.

You can also clearly see the effect of increasing the lowest mandatory speed supported by the wireless network. Once you increase it to 54Mbps, even with 15 SSIDs, you are only consuming 2% of the available air-time.

I suspect that the conventional wisdom of keeping your SSID numbers down to below 5 is founded on the assumption that many wireless networks are going to be installed using default settings. Often, default settings will configure the lowest mandatory speed to one of the lower 802.11b speeds, which could then make significant numbers of SSIDs an issue.

For me there are several lessons to take away:

  • Verify what the defaults of a system are – what is the lowest mandatory speed configured on your system out of the box?
  • Increasing the lowest mandatory speed on a wireless network is going to increase the efficiency (and hence throughput) of your wireless network significantly – less time will be given over to beacon traffic
  • The ‘less than 5 SSIDs’ rule may be a good starting point, but on a well-engineered network, it may not be as relevant as it used to be, especially in the presence of modern wireless clients which do not need to support the lower, legacy speeds of 802.11b/g.

A word of caution though before making any wholesale changes to your network. Make sure you do not have any older wireless clients that need to be able to connect to the network at the slower/legacy speeds. Clients need to be able to initially associate to a wireless network at the lowest mandatory speed supported by a wireless network. If you have older devices that are not in areas that have good coverage, they may not be able to associate at a higher speed and will not be able to join the wireless network in those areas. It is probably worth testing the effect of any changes you make carefully.


I’d welcome any feedback on my testing. If there are any flaws in my logic or testing or there are other considerations I may have missed, then please feel free to drop me a note or comment.

Wednesday, 14 August 2013

5GHz Unlicensed WiFi Channels in the UK - White Paper

(Note: this white paper has been superseded with this new updated version)

I put together a few articles a few months ago talking about how the unlicensed 5GHz band is used for WiFi here in the UK.

I thought it might be a good idea to consolidate all of the information that I found in to one place, so that people researching the topic could find and digest it more easily.

Therefore I put together a white paper about how 5GHz is used for WiFi here in the UK. You can download it from here.

There will no doubt be errors, omissions and other facts that folks would like to suggest. So, please feel free to drop me a note and I'll update this document from time to time to improve the quality of information that it contains.

Nigel.

Download the document from the following sources:

Monday, 10 June 2013

802.11ad - Just for Home Cinema...Right?

One of the things I love about Twitter is that once in a while you stumble across something that completely shifts your view of the world. I spotted this little nugget (posted by @wifichef) a couple of days ago, which made me significantly re-assess my view of the application of 802.11ad technology:
"A deeper dive in to High Capacity WLANs: http://t.co/L6kcx5oMI9"
Expecting another deep dive in to 802.11n high density WLANs (...small cell sizes, using 5GHz, band steering, disabling lower speeds etc.) I clicked through the link to see if I could find any new information. However, I was completely surprised to find myself looking at a whitepaper discussing the merits of 802.11ad! In fact, it actually highlighted the disadvantages of a traditional 'legacy' WiFi network - this had me hooked :)

I must admit that I had dismissed 802.11ad (which uses the 60GHz band) as a niche technology that I'd probably hardly ever see in the Enterprise environments that I tend to work in. (I must admit to having only a superficial knowledge of the 802.11ad standard though). After all, what use is a wireless technology that can only travel a few feet, particularly when you have a building of maybe hundreds or thousands of people? How could we ever design usable WiFi networks with cell sizes that small!? You might see it on some consumer-grade wireless routers, perhaps for movie streaming in the home. Beyond that...nah, I just didn't see  it taking off.

But, after reviewing this whitepaper from Wilocity, I had to pause and re-assess my view of 802.11ad. It details testing done in high density client environments using 802.11ad-capable laptops. They posted some very impressive link speed and SNR results for a 'high density' of 802.11ad stations in close proximity. This was achieved by deploying a number of 802.11ad laptops, each of which had an 802.11ad wireless docking station next to it on the desk. Each desk had an equal number of laptops and docking stations around the edges of the desk (as you might expect to see in a typical office).



My first reaction was: "Well why would you do that? If each docking station is cabled anyhow, why not just pull the network cable out of the docking station and connect it in to the laptop!?". But, after some thought, I started to consider how this technology might advance in the future... 

It looks like we only have per-laptop 802.11ad docking stations at the moment (which obviously doesn't save you much in cabling, assuming each docking station is cabled). But,  perhaps wireless equipment vendors might be able to manufacture per-desk 802.11ad access points in the future, for just the users occupying that desk? If that could be coupled with fast transition to existing 802.11n/ac office-wide networks, then as users roam about the office, they could hop between 802.11n/ac & 802.11ad networks. This would provide high speed 802.11ad at a user's desk, with lower speed 802.11n/ac as they use the traditional office wide network whilst moving between desks and rooms.

These super-small cells could make the planning of high-capacity wireless networks much easier in office-type environments. Just put an AP on each desk, together with a token blanket of traditional (ceiling-mounted) WiFi coverage to provide slower-speed transit connectivity as users move around. That would certainly make HD wireless surveying a lot more straightforward!

I have no idea how much of this will be technically possible, but I could certainly see the attraction of this type of super-small cell. It is much more akin to the provision of desktop hubs or switches that regular wired network engineers could get their head around, making support and planning much easier than traditional WiFi networks. This would obviously require more cable drops around the office, but there may be some environments where the additional cabling is worth the trade-off for the additional capacity and ease of deployment.

It is going to be fascinating to see how 802.11ad evolves and whether it could introduce yet another paradigm shift around WiFi networking.

Tuesday, 28 May 2013

Cisco ACS Policy Decisions Based on SSID Name

If you're using an authentication server (such as Cisco's ACS) to make policy decisions about wireless users, there may be times when you'd like to make a decision based on the name of the SSID that the user is joining. In this article, we'll look at how you can do this.

In this article, I'm going to assume that we are using a Cisco wireless LAN controller, together with a flavour of Cisco ACS 5.x. I've seen this method used with Cisco ACS 4.x (see references at the bottom of this article) and wouldn't be surprised if you could modify the technique for other RADIUS servers. When Googling about this subject, I don't see any results that show how to do this in ACS 5.x, so I thought it was worth a quick note.

Background

In brief, when a wireless client is attempting to authenticate to an SSID on a Cisco WLC network, if 802.1x is being used to authenticate users, then various RADIUS attributes are sent to the RADIUS server (e.g. ACS) as part of the authentication handshake between the WLC and the RADIUS server. One of the attributes sent to the RADIUS server contains information about the source SSID. This attribute can be leveraged in the policy/decision process for authenticating users.

The attribute we are interested in is the DNIS (Dialed Number Identification Service). If we inspect this attribute in the ACS logs, we can see that the DNIS contains the MAC address of the RADIUS client (the WLC), together with the SSID name. Here is a screenshot that shows an authentication record for a client (the SSID is called "home-8021x")



Having the MAC address and SSID mixed together like this is not that useful, but by using a wildcard character to ignore the MAC address segment of the field, we can look at the SSID name. Luckily for us, the '*' character can be used to wildcard the leading part of the field (i.e. the MAC address) and allow us to just look at the SSID segment that we wish to use for our decision making process.
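
As a purely illustrative example (the WLC MAC address below is made up, and the exact formatting of the DNIS/Called-Station-Id string can vary between WLC software versions), the value logged by ACS and the matching wildcard filter look something like this:

DNIS value seen in the ACS logs:      00-1a-2b-3c-4d-5e:home-8021x
End Station Filter (DNIS) to match:   *home-8021x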

To be able to include the SSID in the decision making process, we have to create an End Station Filter within ACS that we can drop in to our policy rules. Here is the ACS page that we are interested in:



To match on our SSID of interest ("home-8021x"), we have to configure the end station filter as shown below (note the use of the "*" to act as a wildcard for the MAC address of the RADIUS client):



Configuration Example

Once we have our End Station Filter in place, we can make some very granular decisions around authentication and authorisation of users trying to join the SSID. One area that this is very useful for is defining Service Policies. We can tie a service definition to a particular source SSID.

For reference, here is a screen-shot from the Cisco WLC GUI, showing the SSID name that we are using:



In the example below, we have a service policy called "Auth_Users_Select". You can see from the highlighted field that only authentication requests that are received from our sample SSID will hit this policy entry, as they must match the End Station Filter that contains the SSID information. 




If you are looking at your own copy of ACS whilst looking at the example shown above, you may not see all of the fields shown in this screen-shot. You may need to customize the fields shown by hitting the 'Customize' button shown at the bottom right of the screen-shot.

Caveats

There are a couple of caveats to bear in mind:
  • SSID names are case sensitive. If you aren't hitting the policy line you expect, double check the case of the SSID name in your end station filter, compared to that on your WLC
  • Bear in mind that you are using a wild-card in the end station filter. If you have similar-named SSIDs, make sure you are not filtering out an important part of the SSID name inadvertently
Conclusion

Using End Station Filters in this way gives you a powerful means of making both authentication and authorization decisions.

Particularly when considering service policy decisions within ACS, being able to associate a service with a particular SSID is valuable - each service (e.g. Corporate-data, Corporate-voice etc.) may be authenticated by its own method using this technique.

It's also worth noting that the same effect may be achieved using the WLAN-ID RADIUS attribute, which provides the (numerical) index of the SSID (this is often suggested in other configuration examples I have seen). Although using the SSID index (via the WLAN-ID RADIUS attribute) is a valid method, unless there has been a policy of assigning the same SSID to the same index on every WLC in the network, this may not be a feasible approach.

References

The following documents provide some very valuable references, particularly if you are looking for the ACS 4.x version of this technique:

Wednesday, 22 May 2013

Aruba Tech Field Day - 802.11ac Product Announcement


Yesterday was the official launch of Aruba's journey in to the world of 802.11ac with their online (and real-world) Tech Field Day event, where they presented their products and strategy for 11ac. I was a virtual participant, watching from over here in the UK. I have to say up-front that I do not currently supply or support Aruba products, but was very interested to hear more about their views on 802.11ac, together with their product offering. There was a lot of ground covered, but here are a few (brief) notes of things that I found of particular interest from the sessions I managed to view.

802.11ac

There was a very informative and lengthy discussion around 802.11ac technology, together with the lessons learned by Aruba in their testing to date. I won't cover all points here, but the headlines that stuck in my mind were:

  • Smartphones/tablets will continue to be primarily single stream, capable of 80MHz bonded channel support
  • Although 11ac brings significant speed advances, it's not the speed itself which is the advantage, it's the increase in efficiency of the WLAN which is the big win. The faster a client can get on the WLAN, send its data, and then get off, the more clients will be able to use the WLAN (which remains a shared medium). This will help to meet the growing demand for WiFi capacity
  • The highest "headline" speeds of 802.11ac will only be achieved by clients in close proximity to an AP. The more complex modulation and signal processing involved means that excellent signal quality (e.g. excellent SNR levels) is required to achieve the higher speeds.
  • Sticky clients continue to be an issue (see "ClientMatch" below), which can significantly impact the efficiency of a high density WLAN. Client speeds drop as they move away from an AP, causing a bottleneck for other clients on the same AP that they are "stuck" to.

ClientMatch

As mentioned above, Aruba highlighted the ongoing issue with "sticky clients". These are clients that initially associate with an AP and then, despite 'better-choice' APs being available, remain associated with it as they move away. The issue with this is that the client speed will drop as the signal level falls, so that it is transmitting at lower speeds and impacting the efficiency of other clients using that same AP. If clients can be made to prefer (and roam to) a more local AP, that will facilitate a better link speed, together with all of the efficiency gains that this brings to the WLAN.

Aruba have a 'patented' technology called ClientMatch, which apparently looks at things from a client point of view and can 'steer' clients to better-choice APs. This technology can apparently work for all types of (legacy) client, but works best on more recent types of client that support newer 802.11 features such as 802.11k/v.

Reading between the lines, I think it analyses signal quality and signal levels from clients (at the AP end of things), together with 802.11k/v information to make some decisions about how things are looking from a client's point of view. Then, somehow (presumably through band steering and ignoring probes etc. from clients below pre-defined signal thresholds) it 'steers' them to a better AP.

We all know the types of information that are available to an AP from clients and the limited amounts of information that clients themselves will supply or act on. So, it's hard to see what there is here to actually patent. Unless there is some type of agent installed on the client (which there isn't in this case), what other unknown (patentable) mechanism could possibly be at work here? Apart from some 'secret sauce' decision making from the WLAN point of view, using known measurements and techniques, it's hard to understand what can be so unique about this. To be honest, without more information, I'm struggling with the concept of a unique offering here...

Access Points

Early on in the proceedings, Aruba presented their new 11ac access points: the AP220. There are two models, one with internal and one with external antennas. The units they had on display at the event were very impressive looking. They looked relatively light-weight (judging by the way they were being handled) and they appeared pretty sleek, as they have now dispensed with the usual air-vents to meet the 'wipe down' needs of the healthcare sector.

The AP has 2 Gigabit Ethernet ports to cater for the theoretical throughput speeds that can be achieved by an AP with 2 radios (1 x 5GHz 11ac and 1 x 2.4GHz 11n). I didn't quite get the information around the power requirements (i.e. 802.3af vs 802.3at), but looking at the datasheet, it looks like it can run in an 'Efficient Mode' with an 802.3af POE port and in full functionality mode with an 802.3at POE port. The detail on how the 2 Gigabit Ethernet ports will be used (i.e. load balancing or a true trunk) was also unclear (this is still TBA on the datasheet I am looking at...).

The guys from Aruba pointed out that the AP uses less power than 'other solutions which require a plug in module' (i.e. Cisco). But, from what I see so far, both solutions require a full 3at POE port to operate with full functionality, so, I don't really see an advantage there..?

They also said that in their experience to date, you can pretty much do a one for one swap out of 11n APs for 11ac, as the general coverage patterns were broadly similar.

Lync

A Microsoft representative provided an excellent presentation around the Microsoft Lync product, which was fascinating for those of us (OK, maybe just me) who aren't that familiar with Lync. He described the challenges of trying to prioritise the different traffic flows that originate from a Lync client.  

The Lync client itself may be a number of form factors (laptop, tablet, phone), which may be the source of data, voice or video traffic. Trying to configure the QOS requirements for just those traffic types is a challenge, but throw in the fact that there is signalling traffic, some traffic is encrypted and some protocols do not use well known port ranges, and you have a whole heap of trouble.

He then went on to describe how Aruba have become a certified Microsoft Lync partner, allowing them to have access to an API that is made available from a Lync server. Having access to the API means that the Aruba wireless LAN controller can exchange information and drill down in to the detail of each Lync session, allowing decisions (such as QOS prioritisation) to be made as required. Apparently, Aruba and Microsoft have invested a lot of time in testing this exchange of information across the API, which allows better decisions to be made by the wireless network and provides much richer information to be made available on the wireless management platform.

This foray in to the Lync API was described as Aruba's move in to the brave new world of SDN: the application providing information for the network fabric to provision the resources required for an application. Exciting stuff!!! It certainly sounds like a good strategic move for both organisations, but I wonder if an open, standards-based API would be a better way to go long term (...imagine a unique API for every different application)? This (SDN) isn't an area of expertise for me, so please excuse any comments which seem misguided and please attribute them to my technical ignorance :)

Finally

Finally, I have to say thanks to Aruba, Microsoft and Gestalt IT for providing such open access to this event. The information presented was both valuable and fascinating. 

The event itself was very well organised (as you might expect from those Tech Field Day boys), with a great deal of vendor material (from Aruba/Microsoft), together with the opportunity for some challenging questions from vendor-neutral folks as well. I think this open-forum approach really shows that Aruba is ready to listen, as well as very generously share, which raises my opinion of them significantly (...not that it was low before :) - I've had no previous dealings with Aruba).

I believe that Aruba are also going to make some of the material available from the presentations, in the form of videos and some of the slide decks - I particularly look forward to the recommended Lync QOS settings that were shared :)  I also strongly recommend that you review some of the great material that Aruba have made available about 802.11ac and their products.

This has been another very interesting and exciting chapter in the unfolding 802.11ac story - thanks to Aruba, Microsoft & Gestalt IT.
