Wednesday, 28 May 2008

IP Spoofing

Today's subject is IP spoofing, and before you ask: no, I'm not going to tell you how it's done. You'll just have to use Google to find that out for yourself.

So here it is: RPF (reverse-path forwarding). The Unicast RPF feature helps to mitigate problems caused by malformed or forged (spoofed) IP source addresses entering a network, by discarding IP packets that lack a verifiable IP source address.

This reduces the effectiveness of a number of attack methods that rely on falsifying the traffic source to create a denial of service (DoS). When enabled, the device checks the source address of each packet against the interface through which the packet arrived, which helps defend your network from spoofed packets that are causing problems.

Note: Unicast RPF should not be used on interfaces that are internal to the network.

Verifying the source address of IP traffic against the routing table reduces the possibility that an attacker can spoof the source of an attack: packets are dropped if the device determines, by checking its routing tables, that there is no feasible path back through the arrival interface for the source address.

Enabling reverse-path verification in environments with asymmetric routing can adversely affect legitimate traffic, so be careful about where you use it, but for some 80% of you this should not be an issue.

So here are the commands.

On a Cisco ASA the command is entered in global configuration mode

ip verify reverse-path interface {interface_name}

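As a quick sketch, here is what that might look like on an ASA with an internet-facing interface named "outside" (the interface name is just an example, and you should double check the show command against your own ASA version):

ip verify reverse-path interface outside
! check how many packets the reverse-path check has dropped
show ip verify statistics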

On a Cisco router the commands are

ip cef distributed
interface {interface_name}
ip verify unicast reverse-path

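To make that concrete, here is a sketch for a smaller edge router (so plain ip cef rather than distributed CEF), where the interface name GigabitEthernet0/0 is just an example; adjust it for your own kit and IOS version:

ip cef
!
interface GigabitEthernet0/0
 ip verify unicast reverse-path
!
! check the per-interface and global drop counters afterwards
show ip interface GigabitEthernet0/0
show ip traffic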

But there are some other points about the router configuration that you need to know.

The ip cef distributed command enables distributed CEF on the router (plain ip cef enables standard CEF). CEF is required for Unicast RPF, and distributed CEF is required on routers that use a Route Switch Processor (RSP) with Versatile Interface Processors (VIPs).

You might want to disable CEF or distributed CEF (dCEF) on a particular interface if that interface is configured with a feature that CEF or dCEF does not support.

In this case, you would enable CEF globally, but disable CEF on a specific interface using the

interface {interface_name}
no ip route-cache cef


which enables all but that specific interface to use express forwarding. If you have disabled CEF or dCEF operation on an interface and want to reenable it, you can do so by using the

interface {interface_name}
ip route-cache cef


command in interface configuration mode.

You can also use an access list with RPF to control whether packets that fail the check are dropped or forwarded, and to log them, using ip verify unicast reverse-path {list_number}

In this next example the logging option is turned on for the access list entry and dropped packets are counted per interface and globally. Packets with a source address of 172.16.101.100 arriving at interface S0/1 are forwarded because of the permit statement in access list 197.

Access list information about dropped or suppressed packets is logged (logging option turned on for the access list entry) to the log server.

interface s0/1
ip verify unicast reverse-path 197

access-list 197 deny ip 172.16.101.0 0.0.0.63 any log-input
access-list 197 permit ip 172.16.101.64 0.0.0.63 any log-input
access-list 197 deny ip 172.16.101.128 0.0.0.63 any log-input
access-list 197 permit ip 172.16.101.192 0.0.0.63 any log-input
access-list 197 deny ip host 0.0.0.0 any log


How you configure the access list for RPF is up to you, as you'll know more about the packets you're expecting on your network, but I would suggest you keep the logging option on at first until you're happy with your setup.

Also keep an eye on the CPU load of the router or ASA, as these options can add CPU overhead if you have a fast connection with lots of traffic going over it.
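
A quick way to keep an eye on that (assuming reasonably recent IOS and ASA software) is:

! on the router
show processes cpu history
! on the ASA
show cpu usage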

Sunday, 25 May 2008

Cisco Firewalls NTP

I had a comment this week that the NTP commands I posted didn't work for some of you. After a quick investigation I found the problem: you're talking about the Cisco ASA (Adaptive Security Appliance). Both the ASA and the PIX (Private Internet Exchange) do not use the same commands as Cisco routers, and previously I was talking about Cisco routers.

So I'll try my best to keep my postings clear about which Cisco appliance I'm talking about. To recap: this posting is about the ASA and NTP.

In a simple setup you could just use the IP of your NTP server and the interface it's reachable on.

ntp server {ntp-server_ip_address} [source interface_name]

This would be enough in most networks where you are taking the time from a local NTP server, whether it's Linux, Unix or Windows.
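
For example (the address and interface name here are made up, so substitute your own):

ntp server 192.168.1.10 source inside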

However, in a large enterprise, or where the time server is external, I strongly recommend you use MD5 authentication; otherwise people can send time packets to the device that will confuse its clock and make tracking a real attack very hard.

ntp authenticate
ntp trusted-key {ntp_key_id}
ntp authentication-key {ntp_key_id} md5 {ntp_key}
ntp server {ntp-server_ip_address} key {ntp_key_id} [source interface_name]

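Filled in with example values (the key ID, key string, server address and interface name below are all made up), that might look like:

ntp authenticate
ntp trusted-key 10
ntp authentication-key 10 md5 MySecretNtpKey
ntp server 192.168.1.10 key 10 source inside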

This might sound like a lot of work for one service, but remember that every service that is not locked down is a threat to your network, as it can and will be used against you. NTP might not sound very dangerous, but it is very useful for an attacker to be able to confuse you about when an attack really took place.

Tuesday, 20 May 2008

PC Imaging Vs PC Backups Vs Remote Operating System Deployment

The main ways to keep your users' workstations and notebooks running are never liked by the users, but as I always say: read the network usage policy.

You know the drill: lock down the install functions and remove anything and everything that is not needed. Control what can be seen on the internet, and if they can't find a business reason for it, don't allow it. Don't let them connect devices to the PC or network without express permission from the network administrator.

This keeps them from crashing the machine by installing disruptive programs, but what about the event of hardware failure? There are three main ways on the market to get around hardware failure, and at the moment the most popular one is imaging.

Imaging
Imaging creates a file that contains an exact copy of the data on a drive at that point in time. The main thing to note is that you will need enough space to store these images on the network, because if they are kept on the local PC and the drive fails you've lost everything. That might be as much as 5 GB per workstation or notebook, and that's after compression, plus incrementals.

The second point is that in a domain, computer account passwords change after 90 days, so an image older than 90 days is no good if it was taken while the PC was joined to the domain. However, Symantec Ghost has an option to disjoin the PC from the domain before the backup and rejoin it afterwards.

The third point is that the restore has to be done to similar hardware. This is where Acronis has the upper hand at the moment: their Universal Restore makes it easier to restore to other hardware. Note I said easier, not easy; there is still some hardware you will have lots of problems with because of drivers.

The fourth point is that when you have a lot of identical hardware, you can build one image and deploy it to all of the PCs by CD or over the network, speeding up large builds.

PC Backup
PC backup uses the same basic idea as imaging, but backs up to tape or SAN. This again means you need enough space to back up all your PCs, so you might be left with a space issue here too.

The second point is that you have to restore to similar hardware. There are some tools out there to help you with this, depending on your backup solution, but sadly there are just too many for me to cover here, so I won't.

The third point is that backups don't really help you deploy new PCs to the network, so you are still left having to use one of the other two options if you want to speed up your installs.

The fourth point is remote backups: notebooks might not be on site when you want to back them up. Some backup software will work over a VPN link while other products will not handle the lower speed, so keep this in mind when picking a backup product. Imaging, by contrast, can be done to a local hard drive and copied to the network when a connection is available.

Remote Operating System Deployment
Last but not least is deploying the operating system from a network location: if something goes wrong with the hardware you just install a fresh build on new or repaired hardware, and these installs can be customised to include all your applications. Again, space on the network to hold the installs is needed, but less than with the imaging and backup options. As an example, a custom Vista install is about 7 GB, and depending on the drivers you place in the package it might handle all your hardware from one build.

The second point is that if users store information on their desktop and you don't have roaming profiles, they will lose it when a fresh install is done; remember this, as it's the drawback of doing fresh builds every time. If you are using roaming profiles you'll notice that, because you store profiles rather than the full contents of the hard drive, you save more than 20% of the space, as you aren't keeping the operating system files.

The third point is that you will have to update this build as often as your hardware changes, and if you are buying only a few PCs at a time from different hardware vendors you'll end up having to create more builds to cover all your hardware, which can be very time consuming.

Now the overview
So I know what you're dying to ask: which one would I use? Well, there is no single good answer, because the best choice changes depending on the network and the SLA for getting the PC running again.

Personally, I would say that remote operating system deployment is the better choice if you have roaming profiles and a secure domain, but if you have local profiles only then you have to look at backup or imaging and see which one works better in your environment. Lastly, consider the cost of each: none of these are free, but all of them are cost effective, as it's better than spending your time building PCs by hand.

In short, a mix of these should be used, sometimes even all three. However, a word of warning: don't make your life harder than it has to be. Find one product in each area you're happy with and use it; don't mix and match. Remember, you're doing this to free up your time, not to spend it reading product manuals.

Yes, you heard the M word, "manuals". I know we don't like reading them, but with this kind of thing it's best to know all the options of the product before deploying it.

I'll leave you with this as a final example: backup for workstations and servers, remote operating system deployment for new builds, and imaging for notebooks.

SQL Transaction Logs

SQL database transaction logs can become quite large, to put it politely, if you are running the full recovery model on a database and not monitoring it closely. The other option is to put the database into the simple recovery model, but I don't know of many companies that would use simple, as who would admit that their database is not that important?

So how do you keep the transaction log from getting massive? Well, the simple fact is that you need to back up the transaction log. But this just clears the inactive transactions, it doesn't shrink the file, so the question is what you are actually looking at when you say it's getting too big, because you have two kinds of space to think of.

Space on disk (the physical space occupied by the transaction log file)
White space (the free, unused space inside the transaction log file)

White space is what you get when you back up the transaction log: the backup clears the content that has been backed up, but it does not shrink the log file, so the physical disk space used doesn't change. However, you can now fill that white space inside the transaction log with new transactions.
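
As a small sketch (the database name and backup path are made up), you can watch this happen by backing up the log and then checking the log space figures:

BACKUP LOG Sales
TO DISK = 'D:\Backups\Sales_log.trn';

-- shows the log size and the percentage of it actually in use, for every database
DBCC SQLPERF(LOGSPACE);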

How this affects you depends on the way you grow your databases: by autogrow, or by manually adding space as you go along.
If you manually grow the transaction log, then you will need to make sure your backups happen frequently enough that you are not having to grow the log every few days or hours.

A good rule of thumb is to back up the transaction log files regularly to delete the inactive transactions in your transaction log.
Design the transactions to be small.
Make sure that no uncommitted transactions continue to run for an indefinite time. Schedule the Update Statistics option to occur daily.
To defragment the indexes to benefit workload performance in your production environment, use the DBCC INDEXDEFRAG Transact-SQL statement instead of the DBCC DBREINDEX Transact-SQL statement. If you run the DBCC DBREINDEX statement, the transaction log may expand significantly when your SQL Server database is in the full recovery model. Additionally, the DBCC INDEXDEFRAG statement does not hold locks for a long time, unlike the DBCC DBREINDEX statement.

Note: Microsoft is replacing these commands in later SQL Server versions with ALTER INDEX statements.
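
As a sketch (the database, table and index names below are made up), the old and new forms look like this:

-- SQL Server 2000/2005 style: defragment one index with minimal locking
DBCC INDEXDEFRAG (Sales, 'dbo.Orders', IX_Orders_Date);

-- SQL Server 2005 and later equivalent
ALTER INDEX IX_Orders_Date ON dbo.Orders REORGANIZE;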

Simple
The simple recovery model allows you to recover data only to the most recent full database or differential backup. Transaction log backups are not available because the contents of the transaction log are truncated each time a checkpoint is issued for the database.

Full
The full recovery model uses database backups and transaction log backups to provide complete protection against failure. Along with being able to restore a full or differential backup, you can recover the database to the point of failure or to a specific point in time. All operations, including bulk operations such as SELECT INTO, CREATE INDEX and bulk-loading data, are fully logged and recoverable.

Bulk-Logged
The bulk-logged recovery model provides protection against failure combined with the best performance. In order to get better performance, the following operations are minimally logged and not fully recoverable: SELECT INTO, bulk-load operations, CREATE INDEX as well as text and image operations. Under the bulk-logged recovery model, a damaged data file can result in having to redo work manually based on the operations that are not fully logged. In addition, the bulk-logged recovery model only allows the database to be recovered to the end of a transaction log backup when the log backup contains bulk changes.

So once again, based on the information above it looks like the Full Recovery model is the way to go. Given the flexibility of the full recovery model, why would you ever select any other model? The following factors will help you determine when another model could work for you:

Select Simple if:
Your data is not critical.
Losing all transactions since the last full or differential backup is not an issue.
Data is derived from other data sources and is easily recreated.
Data is static and does not change often.
Space is limited to log transactions. (This may be a short-term reason, but not a good long-term reason.)

Select Bulk-Logged if:
Data is critical, but logging large data loads bogs down the system.
Most bulk operations are done off hours and do not interfere with normal transaction processing.
You need to be able to recover to a point in time.

Select Full if:
Data is critical and no data can be lost.
You always need the ability to do a point-in-time recovery.
Bulk-logged activities are intermixed with normal transaction processing.
You are using replication and need the ability to resynchronize all databases involved in replication to a specific point in time.
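
Whichever model fits the lists above, it is set with ALTER DATABASE; the database name here is made up:

ALTER DATABASE Sales SET RECOVERY FULL;
-- or: ALTER DATABASE Sales SET RECOVERY BULK_LOGGED;
-- or: ALTER DATABASE Sales SET RECOVERY SIMPLE;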

Note: with full recovery you will need to back up the transaction log frequently to prevent it growing out of control; if you don't do this you will end up having to manually truncate it with the TRUNCATE_ONLY command.

BACKUP LOG database WITH TRUNCATE_ONLY is not recommended!!
After truncating the log using either NO_LOG or TRUNCATE_ONLY, the changes recorded in the log are not recoverable, so for recovery purposes you should immediately execute BACKUP DATABASE; otherwise you will be unable to restore to a point in time.
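
If you have already truncated the log, the sequence looks roughly like this (the database name and path are made up, and again this is a last resort rather than a routine job):

BACKUP LOG Sales WITH TRUNCATE_ONLY;
-- take a full backup straight away to start a fresh, restorable backup chain
BACKUP DATABASE Sales
TO DISK = 'D:\Backups\Sales_full.bak';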

So to make it clear, after all this you should do the following.
The best practice, and the way to avoid ever having to run the NO_LOG or TRUNCATE_ONLY commands, is to put the database into the recovery model that best suits it.

Do regular log backups to keep the transaction log a reasonable size, and run a maintenance plan that shrinks the log files and databases on your server regularly, as well as reindexing and other fine-tuning tasks.
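
A minimal version of that maintenance job might look like the following (the database name, path and logical log file name are assumptions; check the logical name with sp_helpfile):

BACKUP LOG Sales
TO DISK = 'D:\Backups\Sales_log.trn';

-- shrink the log file back down to around 512 MB once the backup has cleared it
DBCC SHRINKFILE (Sales_Log, 512);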

Tuesday, 6 May 2008

IIS Security

As much as I hate to say it, information technology is a young man's sport, and while admitting to this I'll also admit that I am no longer as young as I once was; however, I am by no means an old man just yet. Still, you tend to notice the Grandpa Simpson syndrome among older IT managers and staff as they hark back to their heyday. These stories often have no relevance to your current predicament and do little other than waste your time if you bother to listen to them. These days, when someone starts to talk about how things were 10 years ago, I tend to cut them short by saying "well, since the advent of stun guns I've not needed to listen to stories about the dark ages". Said with enough menace in the voice, it tends to stop them talking and they leave me to work in peace.

Now that you understand what Grandpa Simpson syndrome is, you'll undoubtedly notice it around your workplace within moments of reading this.

OK, now to get down to today's lesson: Internet Information Server. If you have the need to run IIS rather than Apache, more's the shame because it's not as stable as Apache, but anyway, here are some basic things you can do to improve its security.

First rule: don't use the default web site for anything other than admin purposes. There is lots of information freely available on the web about removing its virtual directories and other services, but personally I find these services useful and so may you. My recommendation is to switch the site to Windows authentication, restrict access to a web administrators group, and permit access only from a trusted IP range so that you can continue to use it safely.

Second rule: remove services that you're not using, such as NNTP, FTP and SMTP; some 70% of all sites will not use them. For the sites that do use them, make sure you lock them down; URLScan, available from Microsoft as part of the IIS Lockdown tool, helps here. With the FTP and SMTP services you need to look at how to secure them; with FTP this is most easily done with user isolation.

Note: The MetaBase is the repository for Internet Information Services configuration values. In IIS 6.0 the MetaBase is contained in the files MetaBase.xml and MBSchema.xml in the systemroot\System32\Inetsrv folder; MetaBase.xml stores the IIS configuration information. Additionally, Microsoft provides tools such as MetaEdit and adsutil.vbs which can be used to view and edit settings directly.

To add isolated users to an FTP site using Active Directory mode, so that users are authenticated against Active Directory, set the following properties in the metabase (a sketch using adsutil.vbs follows the list below):

1. Set UserIsolationMode to 2

2. Set ADConnectionUserName to the user (Domain\UserName) who has permissions to read Active Directory properties

3. Set the DefaultLogonDomain

4. Set AccessFlags properties, for example: AccessFlags=AccessRead|AccessNoPhysicalDir

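As a rough sketch of how this could be scripted with adsutil.vbs (the paths, site number and account name below are assumptions; the first FTP site is usually MSFTPSVC/1, but verify against your own metabase):

cd C:\Inetpub\AdminScripts
cscript adsutil.vbs SET MSFTPSVC/1/UserIsolationMode 2
cscript adsutil.vbs SET MSFTPSVC/1/ADConnectionUserName "MYDOMAIN\ftpreader"
cscript adsutil.vbs SET MSFTPSVC/1/DefaultLogonDomain "MYDOMAIN"
REM AccessFlags is a bitmask and is set the same way, using the value that matches the access you want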

This will make your FTP more secure but remember to make sure the account you use does not have Domain Admin rights or you’ll have just left the barn door open for the world to come in.

OK, let's move on to securing SMTP. This involves requiring users to authenticate to the SMTP server before relaying messages, and only permitting specific computers to relay.

1. In the IIS Manager, right click on the SMTP virtual server and choose Properties

2. Select the Access tab and under Access Control click Authentication.

3. Select the Integrated Windows Authentication checkbox

To add relay restrictions to the SMTP virtual server, perform the following steps.

1. In the IIS Manager on the Access tab, click Relay

2. In the Relay Restrictions box choose Add.

3. To add a single computer, click Single computer, type the IP address of the computer, and then click OK. To add a group of computers (that is, a subnet), click Group; to allow a whole domain, click Domain and enter the name of the domain.

Note: TLS can also be used but unless you are looking to use certificates as part of a site or domain wide policy I do not recommend going to the extra trouble of setting it up.

Third rule: I'd like to talk about web logs, as these are often overlooked by people who are new to web site administration, and there are some key points to remember. First of all, enable extended logging; most people don't, and then find that when something goes wrong or needs investigating they can't, because the events they wanted were never logged. On anonymous-access sites you may only want to log a few simple things, but on SSL sites you may need to log everything because of the sensitive nature of the information.

The last point about web logs is to think about your retention policy. You might need to keep these logs if you are in a legal or financial business, and the log files can be over a gigabyte a day on busy sites, so storing one site's log files for just one month might equal 32 gigabytes; if you have to store them for 6 months or more you can see how this becomes a space issue, as most web servers do not have large hard drives. So look at compressing this data; as an example, these text log files compress by as much as 90% using WinZip and other such programs. You can also set up a scheduled task to delete or compress these files, or you can find free third-party tools on the net, but whatever you choose should be standardised across your setup.
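
As one low-tech example (the log folder path below is just the IIS 6 default and W3SVC1 is a hypothetical site ID, so adjust both), you can apply NTFS compression to the log folder from a scheduled task rather than buying anything:

compact /c /s:C:\WINDOWS\system32\LogFiles\W3SVC1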