Friday, 12 December 2008

Write your own system stored procedure in SQL Server 2005 and 2008

I’ve found that writing stored procedures is always at the top of a DBA’s list of things to do in his or her spare time.

But sometimes you write something so useful you want to use it on every database, not just one. To do this you need to place it into the master database, and here is how you do it.

USE master
GO
-- in case it already exists
IF OBJECT_ID('dbo.sp_pagelocks', 'P') IS NOT NULL
DROP PROCEDURE dbo.sp_pagelocks;
GO
-- my new procedure
CREATE PROCEDURE dbo.sp_pagelocks
AS
SELECT * FROM sys.indexes WHERE allow_page_locks = 0
GO

You can now test it by going to another database and executing it.

USE mytestdb
EXEC sp_pagelocks

You see, things attached to dbo in the master database and named with the sp_ prefix are automatically accessible from all the databases. But remember that this works both ways: if you write a delete or table update statement it can be run against every database, maybe even one you don’t want to change things on, so be careful what procedures you write into the master database.

A word to the wise: try to use it only for system-related procedures.

Saturday, 6 December 2008

Enabling Application Inspection Using the Modular Policy Framework

Cisco ASA provides the Modular Policy Framework (MPF for short, and yes, the full name is quite a mouthful to say) to apply application security or quality of service policies.

In large environments it can sometimes be useful to have a policy inspect the traffic for you to help prevent denial of service attacks.

I'm going to show you step by step how to do this. I'm going to create one service policy that inspects FTP and DNS so you can see how it's done.
First we need an access list to match the traffic.

access-list udptraffic permit udp any any

Next we need to create a class map, which classifies the traffic that will be inspected. Various types of match criteria in a class map can be used to classify traffic; the primary criterion is an access control list.

class-map UDPclass
match access-list udptraffic

Next we are going to create a policy map, which configures security or QoS policies. A policy consists of a class command and its associated actions; additionally, a policy map can contain multiple policies.

policy-map udppolicy
class UDPclass
inspect ftp
inspect dns maximum-length 512

Lastly you need to attach this to a service policy. A service policy can be applied globally (meaning all interfaces) or directly to a single interface.

service-policy udppolicy global

Your final result should look like this.

access-list udptraffic permit udp any any
class-map UDPclass
match access-list udptraffic
policy-map udppolicy
class UDPclass
inspect ftp
inspect dns maximum-length 512
service-policy udppolicy global
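Once the policy is attached, you can check that it is active and watch the per-inspection counters climb with the ASA's show command; a quick sanity check after applying the configuration above:

```
show service-policy
```

If the counters for inspect ftp and inspect dns stay at zero while matching traffic flows, re-check that the access list is actually matching the traffic.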

In addition to letting you create these, Cisco has included a default class map and policy, namely inspection_default and asa_global_fw_policy, as can be seen below.

class-map inspection_default
match default-inspection-traffic
policy-map asa_global_fw_policy
class inspection_default
inspect dns maximum-length 512
inspect ftp
inspect h323 h225
inspect h323 ras
inspect netbios
inspect rsh
inspect rtsp
inspect skinny
inspect esmtp
inspect sqlnet
inspect sunrpc
inspect tftp
inspect sip
inspect xdmcp
inspect http
service-policy asa_global_fw_policy global

You are welcome to use the default policy, and for most of you it will work fine I'm sure, but I find that creating my own policy works better for me.

Sunday, 30 November 2008

File fragmentation

Here is something administrators and system architects often overlook: file fragmentation. Many believe this doesn't impact their systems enough to be worth factoring into the design, but the long-term effect can have a high impact on performance and can even prevent service.

To prevent this we first have to understand how it happens. Luckily this isn't rocket science; in fact it's quite simple. Imagine that your hard disk is a bookshelf. When all is working well it should be in alphabetical order, but if there isn't room in the place where a book should go, it will take the first available place on the shelf. After doing this a few hundred times with the reads and writes of data, you end up with many books in places that make them harder to find, pushing the seek time up. With large files, because the file can be scattered over many parts of the drive, you can even end up with I/O timeouts on file read operations.

On SQL Server it can lead to tasks and operations such as backups failing; on file servers it can lead to files not opening on first request and general complaints from users regarding the performance of the servers.

One of the quickest ways to prevent this from happening is to not mix the data type and applications that have access to a volume.

If you have many applications placing data on a volume you can quickly have a fragmented drive. One of the best examples I have come across was a backup of SQL on the same volume as the database itself. This doesn't sound like much of an issue at first, but if I explain, all will become clear.
The database is growing and the backups are being recreated every day. Imagine there is not enough space at the end of the database file when it grows, so the new data is created just after the end of the backup file. Later that day the backup is deleted and recreated, and being larger, it fills the same space plus some more just past the end of the database again.
This results in what I call the leapfrog effect, as data is always being added at the end. Quickly the volume was 98% fragmented and SQL had I/O errors, with operations timing out because they were taking longer than 15 seconds.

I know what you're thinking: this was some old hardware and that is why it made such a big difference... well, you'd be wrong. This was new 6th-generation hardware, and the data was on SAN drives, which as you know can be faster than local drives by up to twenty times, and we are talking about a database that was only 120GB. Bearing in mind I've seen 2.8TB SQL databases on the same hardware without issue, this really begins to show how much fragmentation can impact a server.

SQL Server has encountered 21 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [E:\mssql\data\testdb_log.ldf] in database [testdb]

On file servers the same can happen, and it appears as SCSI I/O errors where Windows is not getting the data as quickly as it expects. Now on file servers the problem is more complicated, as a file share is a file share and there isn't a lot that can be done. Or is there?

Well, there are some steps you can take. Apart from regular defragmentation of the volume, you can also split the shares so that application shares and user shares are on separate volumes.

So, some quick bullet points for you.

Separate System, Data, Log and backups for SQL servers.

Separate file shares into type of share and have regular defragmentation of those volumes.

Create volumes based on uses and consider how data will react with many write operations.

Tuesday, 11 November 2008

Maintenance task fails on reorganizing indexes

The most common problem with in-house or bespoke databases is developers forgetting to turn page locks on. Now this doesn't sound like the largest problem, and it can easily be fixed with a simple query like this.
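As a sketch of such a query, assuming a hypothetical index IX_MyIndex on a hypothetical table dbo.MyTable (substitute the names from your own error message):

```
-- IX_MyIndex and dbo.MyTable are placeholder names for your own index and table
ALTER INDEX [IX_MyIndex] ON [dbo].[MyTable]
SET (ALLOW_PAGE_LOCKS = ON);
```

Once page locks are back on, the reorganize step of the maintenance plan can take the locks it needs.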


However, the error output from the job may not clearly tell you which database it came from. If you have a large number of databases with similar or identical table names in them, or just too many to check one by one, then you'll be happy to know there is a quicker way.

So here is another way around the problem: I have created a query that will list all the databases, tables and indexes that have page locks disabled, making your task of correcting the problem much easier.



DECLARE @CurrentDB NVARCHAR(1024)

IF OBJECT_ID('tempdb..#tblServerDatabases', 'U') IS NOT NULL
DROP TABLE #tblServerDatabases
-- create temp table
CREATE TABLE #tblServerDatabases (DBName NVARCHAR(1024))

IF OBJECT_ID('tempdb..#tblDBindexes', 'U') IS NOT NULL
DROP TABLE #tblDBindexes
-- create temp table
CREATE TABLE #tblDBindexes (
DBName NVARCHAR(1024),
[index_name] NVARCHAR(1024),
[index_id] NVARCHAR(1024))

IF OBJECT_ID('tempdb..#tblDBtables', 'U') IS NOT NULL
DROP TABLE #tblDBtables

-- create temp table for finding the object id of the tables
CREATE TABLE #tblDBtables (
DBName1 NVARCHAR(1024),
[table_name] NVARCHAR(1024),
[table_id] NVARCHAR(1024))

-- fill the temp table with the database names, excluding any you don't want to list
INSERT INTO #tblServerDatabases (DBName)
SELECT [name]
FROM master.dbo.sysdatabases
WHERE [name] NOT IN ('model', 'tempdb')

-- loop through each database, collecting its tables and any indexes with page locks disabled
SELECT @CurrentDB = min(DBName)
FROM #tblServerDatabases

WHILE @CurrentDB IS NOT NULL
BEGIN
INSERT INTO #tblDBindexes (index_name, index_id)
EXEC ('USE ' + @CurrentDB + ' select [name] as index_name, [object_id] as index_id from sys.indexes where allow_page_locks=0')

UPDATE #tblDBindexes
SET DBName = @CurrentDB
WHERE DBName IS NULL

INSERT INTO #tblDBtables (table_name, table_id)
EXEC ('USE ' + @CurrentDB + ' select [name] as table_name, [object_id] as table_id from sys.tables')

UPDATE #tblDBtables
SET DBName1 = @CurrentDB
WHERE DBName1 IS NULL

-- move on to the next database; min() over an empty set returns NULL and ends the loop
SELECT @CurrentDB = min(DBName)
FROM #tblServerDatabases WITH (NOLOCK)
WHERE DBName > @CurrentDB
END

-- join the temp tables and show the results
SELECT [DBName], index_name, table_name
FROM #tblDBindexes
JOIN #tblDBtables
ON #tblDBtables.[table_id] = #tblDBindexes.[index_id]
AND #tblDBtables.DBName1 = #tblDBindexes.DBName

-- Cleanup
IF OBJECT_ID('tempdb..#tblServerDatabases', 'U') IS NOT NULL
DROP TABLE #tblServerDatabases

IF OBJECT_ID('tempdb..#tblDBindexes', 'U') IS NOT NULL
DROP TABLE #tblDBindexes

IF OBJECT_ID('tempdb..#tblDBtables', 'U') IS NOT NULL
DROP TABLE #tblDBtables

This script has already filtered out the system indexes, leaving you only the ones created by the designer. From this list you should be able to find the database containing the index that is causing the error in the reorganize task.

Sunday, 2 November 2008

Physical Security

Following on from the last posting we are going to talk about security, first physical security… now most of you think of keys and locked rooms when someone says physical security, or worse yet mace spray.

Sadly I’m not talking about your personal security, although you should take some steps there as well.

We are in fact talking about access to servers and consequently your information. The majority of you will have taken some steps to lock your servers and switches away to prevent Joe Public from touching them, but will this be enough to stop the experienced data thief?

Truthfully, no: you have locked away the hardware but not what it’s connected to. As long as an RJ45 port comes out into the office somewhere, any device can be connected to it.

So today we are going to talk about using 802.1x to stop unauthenticated PC’s being connected first of all, and move on to port security for the second part to reduce the chance of a (MITM) Man In The Middle attack.

To begin with you’ll need to set up IAS on your server and enter the IP address of the switch and the passkey you intend to use for the switch and the IAS service to authenticate with one another; we will assume you have done this already. If you need help on this, there are guides covering how to configure the IAS service.

In today’s example we’ll be using a Cisco switch running 12.2 IOS with 802.1x.
802.1x requires a device to authenticate itself with the switch before the switch will forward any packets to or from the device. This is an important first step for good network security.

aaa new-model
aaa authentication login default group radius
aaa authentication dot1x default group radius
dot1x system-auth-control
radius-server host auth-port 1812 acct-port 1813 key IASPaSSwOrd123

Now that the global setting is in place you’ll need to apply it to the interfaces, see below how I’ve configured interface 0/1 to use the server authentication.

interface FastEthernet0/1
switchport mode access
dot1x port-control auto
spanning-tree portfast
spanning-tree bpduguard enable
speed auto

Now that we have set authentication on the port we can look at some other things we can do to improve the security still further.
With port security enabled, the switch learns the MAC address of the device plugged into the port. If the MAC address changes (e.g. someone plugs in a foreign machine) the switch shuts the port down.

interface FastEthernet0/3
switchport mode access
switchport port-security
dot1x port-control auto
spanning-tree portfast
spanning-tree bpduguard enable
speed auto

Now this is good, but it might not be the best fit: you might have decided to have more than one PC on the port, in which case you can allow the switch to learn more than one MAC address by changing the setting. In this case I’ve chosen for it to remember three MAC addresses.

interface FastEthernet0/1
switchport mode access
switchport port-security
switchport port-security maximum 3
dot1x port-control auto
spanning-tree portfast
spanning-tree bpduguard enable
speed auto

Another alternative, if you don’t want to be manually turning ports back on every time you change a bit of hardware, is to set a time-out after which the port will re-enable itself. This time is counted in minutes, so as an example 600 minutes equals 10 hours.
Here is an example of port 5/1 re-enabling after 10 hours of shutdown.

set port security 5/1 shutdown 600
Note this is set at a global level, not at interface level.

This automatic re-enabling of the ports is good for preventing you from losing all the ports on the switch because of some re-cabling, or some user plugging and unplugging an un-permitted device into the switch. But it defeats the point if the time-out is too low, so try to keep it around the 8-hour or higher mark. This means a user who has done something wrong will have to report it, and a data thief will have only one chance a day to try to connect, forcing him or her to wait hours or place a high-profile call to the helpdesk to get the port re-enabled.
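The command above is in CatOS syntax; on IOS-based switches the closest equivalent is errdisable recovery, where the interval is set in seconds rather than minutes. As a sketch, this re-enables ports shut down by a port-security violation after 8 hours (28800 seconds):

```
errdisable recovery cause psecure-violation
errdisable recovery interval 28800
```

Like the CatOS version, this is a global setting, not an interface one.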

Saturday, 1 November 2008

Overview To Disaster Recovery

Most people talking about disaster recovery talk about restoring systems to similar or identical hardware. The reason for this is almost half the fault of the operating system not being flexible to hardware changes, and the other half the fault of the backup software.

A good number of backup software programs restore to the same hardware and leave you to fix any hardware-related issues. Some backup software vendors have, however, developed add-on modules that will do this for you and can drastically improve your disaster recovery time.

However, we all know that if you spend enough money anything is possible, and I’m not writing about which software is best. As with all things there is more than one way to skin a cat, and today I’m going to show you how to solve the above-mentioned problem by design.

Now the way we are going to approach this problem is one of design: we are going to admit that we have some weaknesses in our network, and instead of relying on software to save us we are going to design our way out of the issues encountered.

Let’s start at the beginning. In a live environment the very first thing you need to restore is the domain. For a lot of disaster recoveries the domain is the make-or-break part: if you can’t restore the domain then you have nothing, as it’s pointless having an application or file that you can’t open.

“But I can open files without a domain, I hear you say…” Oh dear, comes my reply: if you can open files without a domain then you have a rather large security problem, but this is something I’ll address in another posting.

So let’s get down to business. We know that we need the most stable and flexible solution for our domain controllers; in a nutshell, virtualization. Domain controllers are generally processor-heavy in large environments but not much else; I’ve seen a 90,000-account domain run on a server with only 1GB of memory.

Since these servers are not so resource hungry they naturally lend themselves to being virtualized. This also removes the domain controller’s dependency on hardware and makes it recoverable to any hardware that can run the image, making you vendor-independent for the hardware; hypervisor is the key word in all this.

Having one or more forms of virtual technology available to you also lets you do your own DR tests and iron out the bugs without having a real disaster.
VirtualBox is a nice free one from Sun Microsystems if you’re wondering where to start or just don’t want to buy new software.

Network Access
Static IPs are not always a good thing, and in a disaster they can be a real problem: for example, when hardware changes the new network adapter is detected and thus all IP addressing is lost. You have two options. The first is to use DHCP where possible, and this I recommend you use as much as you can.

The second is to use netsh to back up and restore the IP configuration, bound to a common interface name such as “LAN”, so that all you have to do is rename the interface to LAN and execute netsh exec c:\ip.txt to restore your IP settings. This is extremely handy if you have 6 or more network cards and you don’t want to spend half an hour setting them up.
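As a sketch (c:\ip.txt and the interface name LAN are just example names), the backup on the source machine and the replay on the rebuilt one would look like this:

```
rem capture the current IP configuration of all interfaces to a file
netsh interface dump > c:\ip.txt

rem on the restored server, after renaming the interface to LAN
netsh exec c:\ip.txt
```

The dump is a plain text script of netsh commands, so you can also open it and edit the addresses by hand before replaying it.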

Network cards can often be an issue in a restore. If you have the space, keep a store of network card drivers on the system disk so you can easily reinstall the network card driver. It's quite common for network cards to be Intel-based these days, so the store of drivers shouldn't be large and this can be easy to implement.

Access to network resources via IP and policy-based security can also be time-consuming to restore, so backing up your IAS, RAS and DHCP scope configuration using netsh is a way to save some time on the restore. After all, you can make 30 or 40 clicks, or you can just type netsh exec file.txt. Now this might not sound like much of a time saver at first, but when you have 8 to 10 servers this can save more than an hour and cut the time spent down to less than a minute. If you also have PsTools you can execute the restore simultaneously on all servers.
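For example, for DHCP (c:\dhcp.txt is just an example file name) the dump-and-replay pair would be:

```
rem export the DHCP server configuration, including its scopes, to a script
netsh dhcp server dump > c:\dhcp.txt

rem replay the script on the rebuilt server
netsh exec c:\dhcp.txt
```

The same pattern applies to the other netsh contexts, so one small batch file per server role covers the lot.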

Scripting is an important part of IT not just documenting how tasks are done but using the tools to hand to reduce the time taken to do them.

Imagine for example I wanted to install a Windows hotfix on 50 workstations and I didn’t have WSUS or internet access on the LAN segment for whatever reason. Do I spend 10 minutes logging on to each workstation and installing said hotfix, ultimately losing several precious hours, or do I sit at my desk, execute c:\pstools\psexec @C:\workstationlist.txt “\\fileserver\share\critical\patch.exe /quiet /passive /norestart” and thus never have to leave my chair?

This kind of scripting can be used in restores with just as much efficiency, as you may need to redeploy patches and updates that are missing from the restore media.
Remember, whatever tools you choose to use, they should be used on the entire environment, as differing restore techniques only add to the complications and time needed to restore the environment.

Now that we have finished on the scripting side of things, let’s move on to the applications. Using Active Directory based authentication is not just easier to centrally administer but also better for application restores, as it saves time on restoring SQL-based logins where there is a mismatch in SIDs, along with file permissions and other such lovely security identifier related issues. So check all existing applications to confirm they use domain authentication, and identify any that might be a problem to restore.

Another largely overlooked point is that some applications are outside of backup software’s ability to restore, so look into the best practices of each software vendor you use and check that the application has a recovery path, as you might need to delete locked files and/or restore the application using its own set of backup tools.

Sunday, 28 September 2008

Cannot start Windows 2003 cluster after restore from backup

I have recently been working with backup software, and even using system state in Windows I have come across problems with restoring Windows clusters.
The detail of the problem is that open files are not fully backed up, so when restored, the cluster registry keys are missing, preventing the cluster from coming online.

If the HKEY_LOCAL_MACHINE\Cluster hive is not available or has missing entries, the cluster service will fail. To restore these settings you will need to restore the hive by restoring the clusdb file.

The clusdb file holds the HKEY_LOCAL_MACHINE\Cluster registry keys. The file itself is located at %windir%\cluster\clusdb, and it can be restored from the quorum drive.

To do this you will need to follow these steps

1. Boot node A to form the cluster (with node B powered-off).
2. Right Click on My Computer, select Manage, Click on Device Manager.
3. Click on View and then click Show hidden devices.
4. Click on Non-Plug and Play Drivers, double click on Cluster Disk Driver, click Driver and change the start-up option to Manual, click OK, and then click Close.
5. Click the Services, and then double-click Cluster Service and change the start-up to Manual click OK, and then click Close.
6. Reboot node A (this starts Windows on node A, without loading the Clusdisk.sys file)
7. Once node A is booted you'll be able to access the quorum drive. Copy the latest quorum_device:\Mscs\Chksequential_number.tmp file to %SystemRoot%\Cluster\Clusdb.
8. Open regedit by clicking Start, click Run, type regedit and click OK.
9. Expand the registry and select the HKEY_LOCAL_MACHINE\Cluster key, then select File and Unload Hive. Then select HKEY_LOCAL_MACHINE, click File and Load Hive, and browse to the clusdb file located under %windir%\cluster\clusdb.
10. At a command prompt, type net start clusdisk, and then press ENTER.
11. At a command prompt, type net start clussvc, and then press ENTER.
12. Restart node B to join the cluster, and replicate the Clusdb file from the sponsor node A.
13. On node A, after both nodes are running, Right Click on My Computer, select Manage, click Device Manager.
14. Click on Non-Plug and Play Drivers, Double click Cluster Disk Driver, click Driver and change the start-up option to Automatic, click OK, and then click Close
15. Click the Services, and then double-click Cluster Service and change the start-up to Automatic, click OK, and then click Close.

You are now finished and should have a working cluster once more.

Wednesday, 3 September 2008

SharePoint "Thread was being aborted"

"Thread was being aborted" while enabling Full-text indexing.

This is one of those little-known areas of SharePoint. I can think of only two reasons you will come across this message: the first is where you have a large SharePoint farm and you are enabling full-text search for the first time or re-enabling it, and the second is if you are enabling it on very slow hardware.

There are two actions you can follow to try to resolve this. The first is to do the following steps.

On the SQL server stop and start the:
"Microsoft Search service" and "MSSQLServer service"
then open SQL Query Analyzer and for each content database run:

exec proc_DisableFullTextSearch
exec proc_EnableFullTextSearch

This enables the FTS outside of SharePoint. After these steps are performed, SharePoint must still be configured to show the search box.
To do this, check the enable full-text indexing component in central admin.

If the error still occurs, increase the time-out value for the Central Admin site.
This can be done by increasing the "executionTimeout" value (default 600 seconds) in the web.config in
C:\Program Files\Common Files\Microsoft Shared\web server extensions\60\TEMPLATE\ADMIN\1033

The line you are looking for is (httpRuntime executionTimeout="600"). As you can see, the default is 10 minutes; I had to set this to 4 hours on one SharePoint farm where the content was over 1TB.
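As a sketch, the edited element for a 4-hour time-out (14,400 seconds) would look something like this inside web.config (your real element may carry additional attributes, which should be left in place):

```
<httpRuntime executionTimeout="14400" />
```

Remember the value is in seconds, so size it to your content database rather than copying my number.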

Saturday, 16 August 2008

Search Engines

Having spent some time on internet searches over the last few months, I'd just like to highlight some areas where common mistakes are made.

The first, and by far the most common, is caching. You can speed up your search engine by almost 50% by making the cache larger; this has both a plus and a negative to it.

The plus is that common searches will be much faster. However, there is a trade-off, as your results might not reflect information indexed only minutes before, so you'll have a slight delay in new entries appearing.

Another common mistake is missing stop words. Now, for those of you that don't know what a stop word is, here is a quick explanation.

A stop word is a word that you do not index as a searchable word in your search engine. This word might appear in the description or name of a site, but you do not use it as a searchable word. Take a word like "the": can you imagine how many sites that would bring back in the results? Nearly all of them, if not all, and as a result it would slow your search engine down a lot. Here are some example stop words.

a, about, above, across, after, afterwards, again, against, all, almost, alone, along, already, also, although, always, am, among, amongst, amoungst, amount, an, and, another, any, anyhow, anyone, anything, anyway, anywhere, are, around, as, at, back, be, became, because, become, becomes, becoming, been, before, beforehand, behind, being, below, beside, besides, between, beyond, bill, both, bottom, but, by, call, can, cannot, cant, co, computer, con, could, couldnt, cry, de, describe, detail, do, done, down, due, during, each, eg, eight, either, eleven, else, elsewhere, empty, enough, etc, even, ever, every, everyone, everything, everywhere, except, few, fifteen, fify, fill, find, fire, first, five, for, former, formerly, forty, found, four, from, front, full, further, get, give, go, had, has, hasnt, have, he, hence, her, here, hereafter, hereby, herein, hereupon, hers, herself, him, himself, his, how, however, hundred, i, ie, if, in, inc, indeed, interest, into, is, it, its, itself, keep, last, latter, latterly, least, less, ltd, made, many, may, me, meanwhile, might, mill, mine, more, moreover, most, mostly, move, much, must, my, myself, name, namely, neither, never, nevertheless, next, nine, no, nobody, none, noone, nor, not, nothing, now, nowhere, of, off, often, on, once, one, only, onto, or, other, others, otherwise, our, ours, ourselves, out, over, own, part, per, perhaps, please, put, rather, re, same, see, seem, seemed, seeming, seems, serious, several, she, should, show, side, since, sincere, six, sixty, so, some, somehow, someone, something, sometime, sometimes, somewhere, still, such, system, take, ten, than, that, the, their, them, themselves, then, thence, there, thereafter, thereby, therefore, therein, thereupon, these, they, thick, thin, third, this, those, though, three, through, throughout, thru, thus, to, together, too, top, toward, towards, twelve, twenty, two, un, under, until, up, upon, us, very, via, was, we, well, were, what, whatever, when, 
whence, whenever, where, whereafter, whereas, whereby, wherein, whereupon, wherever, whether, which, while, whither, who, whoever, whole, whom, whose, why, will, with, within, without, would, yet, you, your, yours, yourself, yourselves.

Now as you can see the list is quite long. One option is to block searches of fewer than three letters; this removes the need for over half of the stop words.

The design of your search engine can help a lot as well. Having a master database do the indexing and many slave databases feed the front-end site will balance the load much better and keep the response times nice and low.

Tuesday, 8 July 2008

Cisco Object-Group and how to use them

PIX and ASA firewalls are very powerful tools, but the access lists seem to get longer and longer with all the services and hosts, and this puts a load on the CPU to check them and on the memory to store them. So what can we do?

Well, Cisco have been very nice to us in giving us a way to take those long ACLs (Access Control Lists) and make them shorter. The way they’ve done this is by using objects; an object represents a set of things such as services or hosts and/or even a network mask.

Enough of the talk, it’s time for an example. This is how an old ACL commonly looks.

access-list outside_in extended permit tcp any host eq www
access-list outside_in extended permit tcp any host eq www
access-list outside_in extended permit tcp any host eq www
access-list outside_in extended deny ip any any

Now as you can see each server is listed, and this can become both messy and unclear after a while; after you’ve listed just six or seven services the list can become very large and almost unreadable when trying to troubleshoot it.

So what they have done is given us the ability to reference the services and hosts as groups and, as a result, put them into just one line.

object-group network DMZ_Web_Servers
network-object host
network-object host
network-object host
network-object host

Now we have an object we can use in the access list that equals all of these hosts in one short line.

access-list outside_in extended permit tcp any object-group DMZ_Web_Servers eq www

As you can see the access list is much shorter, and because the groups can have logical names it’s much quicker to find the problem entry or missing object. There is more than one kind of object group, as you can see from the next few examples; first there is the ICMP object.

object-group icmp-type icmp-allowed
icmp-object echo
icmp-object time-exceeded

There is the protocol object, and shown with it is also an example of how you can nest groups together, so that one object, in this case proto_grp_2, also allows proto_grp_1 as it’s nested inside it.

object-group protocol proto_grp_1
protocol-object udp
protocol-object ipsec

object-group protocol proto_grp_2
protocol-object tcp
group-object proto_grp_1

There is the service object; these are really port objects, and they allow you to specify ports to be opened. We will see later how we can use these with other groups to open up ports on many items at once.

object-group service eng_service tcp
group-object eng_www_service
port-object eq ftp
port-object range 2000 2005

There is the network object for your common or garden hosts and subnet masks.

object-group network sjc_eng_ftp_servers
network-object host sjc.eng.ftp.servers
network-object host

And just to show these can all be nested too, here is another example of a nested group with network objects.

object-group network sjc_ftp_servers
network-object host sjc.ftp.servers
network-object host
group-object sjc_eng_ftp_servers

In this next example an object that contains both groups is given access to www, and the two nested groups are then each given their own access as well, such as SMTP and FTP.

object-group network host_grp_1
network-object host
network-object host

object-group network host_grp_2
network-object host
network-object host

object-group network all_hosts
group-object host_grp_1
group-object host_grp_2

access-list grp_1 permit tcp object-group host_grp_1 any eq ftp
access-list grp_2 permit tcp object-group host_grp_2 any eq smtp
access-list all permit tcp object-group all_hosts any eq www

As you've seen, what would have been 6 lines has been reduced to 3 lines in the final access lists, but we can do better.

Lastly, we can use many objects in just one access list to give all the services and nodes the permissions they need without writing long and complicated access lists. As an example, this grouping enables the access list to be configured in one line instead of 24 lines; with the grouping, the access list configuration looks like this.

object-group network remote
network-object host kqk.suu.dri.ixx
network-object host kqk.suu.pyl.gnl

object-group network locals
network-object host
network-object host
network-object host
network-object host

object-group service eng_svc
port-object eq www
port-object eq smtp
port-object range 25000 25100

access-list acl permit tcp object-group remote object-group locals object-group eng_svc
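To check what the firewall actually built from these groups, the ASA can expand the grouped entry back into its individual access control entries. The group and access-list names below are the ones from the example above; a quick sketch of the show commands:

```
! list the object-groups as configured
show running-config object-group
! show the expanded access control entries (one per host/port pair), with hit counts
show access-list acl
```

The hit counts on the expanded entries are also handy for spotting which hosts and ports are actually being used.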

This concludes our lesson for the day, but remember: the shorter and more efficient the access list, the lower the load on the CPU and memory of the firewall.

Sunday, 6 July 2008

Geographical Computer Networks

Today we are going to talk about geographical computer networks; these are among the most complex networks, however the underlying structure is the same as any other network.

There are two network models: Centralized and Regional sites.

We are going to cover QoS, proxies and Round Robin DNS; most of the information needed to do this I have covered in previous postings.

Centralized gives ease of management and backup, but is often slower for users than Regional sites. In addition it puts an overhead on WAN links, as all requests travel via the WAN, and it creates a single point of failure if the WAN link stops working.

Redundancy: Servers are in a highly stable environment. However, users are dependent on WAN links to reach them.
Availability: Using clusters and load balancing, 99.99% uptime can be guaranteed.
Performance: Good for the local site or sites with strong links, but many Regional sites will perform poorly.

Regional Sites make use of local resources but cost more in administration time because of the complexity of the solution; however, they can make use of Round Robin DNS to find the closest resource.

Redundancy: Not dependent on local hardware, as traffic can be moved to the WAN link should local hardware fail. However, a larger number of servers is needed.
Availability: Using nested named resources gives 99.97% uptime, with always-on resources staying online.
Performance: Good for users, as load is balanced between both local and remote resources.

WAN Network optimization
WAN links between sites can become loaded with a large number of unneeded packets, the most common among these being NetBIOS broadcasts and UDP packets. For this reason you should be sure which traffic is needed and what is passing over your WAN links.

By using WINS and blocking broadcast traffic on your routers you can reduce the UDP packet load. Services and applications that normally use UDP, such as VoIP and streaming applications, can be protected on the LAN with QoS and sent over the WAN in a TCP trunk; a good example of this can be found on Cisco's site.
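To give a rough illustration of the LAN-side QoS idea, here is a minimal sketch of a Cisco IOS policy that gives voice traffic priority on a WAN-facing interface. The class, policy and interface names are hypothetical, and the bandwidth figure is only an example:

```
class-map match-any VOIP
 match ip dscp ef
!
policy-map WAN-EDGE
 class VOIP
  priority 512
 class class-default
  fair-queue
!
interface Serial0/0
 service-policy output WAN-EDGE
```

The priority queue guarantees the voice class low-latency treatment during congestion, while everything else falls into fair queuing.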

SQL Geo Network
One of the biggest problems is Microsoft SQL Server: as yet there is still no true geographical solution for data replication. The closest at the moment is SQL Server 2005 replication, which can be used to create a fault-tolerant solution whereby each production server hosts a read/write copy of the database. The databases are kept in sync using SQL Server 2005 peer-to-peer replication. Applications connect to the SQL cluster through the production interface using a host name that will distribute and load balance traffic between the nodes.

Redundancy: There is a copy of the database on each of the different servers. If one node of the cluster becomes unavailable, the other nodes automatically pick up the traffic.
Availability: Each server can be taken out of the cluster individually so maintenance can be performed without causing the database to become unavailable.
Performance: Application calls to the database are load balanced between the nodes of the cluster. Balancing the load should result in better performance during times of increased activity.
Ideally you should have two servers in large Regional Sites, so that if there is a fault on one the load is not sent over the WAN to the next nearest site unless needed, as the WAN link is primarily used for replication traffic.

Round robin DNS
Is often used for balancing the load of geographically-distributed Web servers. For example, a company has one domain name and three identical web sites residing on three servers with three different IP addresses. When one user accesses the home page it will be sent to the first IP address. The second user who accesses the home page will be sent to the next IP address, and the third user will be sent to the third IP address. In each case, once the IP address is given out, it goes to the end of the list. The fourth user, therefore, will be sent to the first IP address, and so forth.
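In zone-file terms, round robin is nothing more than several A records for the same name; the name and addresses below are hypothetical, and the short TTL keeps clients re-querying rather than caching one answer for a long time:

```
www.example.com.   300  IN  A  192.0.2.10
www.example.com.   300  IN  A  192.0.2.20
www.example.com.   300  IN  A  192.0.2.30
```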

In Windows 2003, Round Robin DNS can be combined with netmask ordering to match the request to the nearest subnet, so that traffic stays with local resources first.
However, a word of warning: this will place more load on your DNS servers, so make sure you have enough of them to take the load.
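On a Windows 2003 DNS server both behaviours can be toggled from the command line with dnscmd; a sketch, assuming the support tools are installed:

```
rem hand out answers in rotating order
dnscmd /Config /RoundRobin 1
rem prefer records on the client's own subnet (netmask ordering)
dnscmd /Config /LocalNetPriority 1
```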

Hardware Consolidation
The other consideration is the amount of hardware and its cost. Centralized solutions have fewer servers and therefore lower administration costs, but the hardware often costs more as a higher fault tolerance is needed.

The fewer the number of servers, the lower the administration cost, so in this example I have used only 3 physical servers: the Active Directory, web and ISA servers are all virtual servers, while the SQL servers are two physical servers.

Personally I think that you can use one blade centre to build a complete site; however, since you need a SAN for large storage, the cost is more than most want to pay for a solution.

Saturday, 28 June 2008

Load Balancers

There are two ways to distribute the load on your network: hardware or software load balancers.

The hardware load balancer has some advantages over software load balancers. The first is that it is independent of the operating system and therefore has better throughput than a software-loaded one. The second is that it can run checks against the returned code; as an example, if it sees a 404 error page you can configure it to remove that server from the web farm.

The best hardware load balancer is Cisco Application Control Engine (ACE) module or a Cisco 7600 series router.

Software-based load balancers are cheaper to use as they require no Cisco training or special hardware. However, since they do not have active probes, if a web server or other balanced service starts to return an error message you are unaware of it, and it may be served to customers for a while before you find out.

So from a design point of view, to have the highest level of uptime you need two hardware load balancers, in either active/standby or active/active mode, both with probes configured to remove unresponsive or error-generating servers from the balanced farm.

One last thing to add: consider using round robin DNS where you have geographical server farms, so that content is provided from the nearest physical server farm.

Saturday, 21 June 2008

What impact can a poorly designed database have?

I was asked why I ask whether you have a risk assessment for each database.

One of the standard questions I ask a business is: do they know what databases they have on their systems, and are they well maintained?
Because a database can have an impact on SQL Server, but just how much?

To answer that question I first have to give you a real-world example. Microsoft talks about a performance hit when a database is not designed or indexed well; let me translate this into technical English, as Microsoft are very cagey about this subject.

What it really means is that an Itanium 2 server with 4 physical CPUs and 64GB of RAM (those of you that know your hardware might see where this is going), running SQL 2005 SP2 and loaded with a badly indexed database, can go from an average CPU load of 20% to 90% on a 3GB database.

Yes, you read that right: 90% CPU load on a 3GB database. Do not underestimate the effect a database can have on your server. This is why you should have a risk assessment for each database.

So it’s important to look at each database and where it came from, as well as having a staging area to test the load before moving it to a production environment.

Surprisingly few companies have a database administrator on their team, most have just the support team and someone that knows how to install SQL.

So when something happens, like SQL Server starting to put a heavy load on the CPU, they don’t know where to look. Most if not all of what you need is just regular maintenance tasks in SQL Server, such as updating statistics, reorganising or rebuilding fragmented indexes, shrinking the databases and taking regular transaction log backups.
However this can’t be done blindly on every database; some databases need special indexing and maintenance to keep them running.

So make a list of the databases that you have and how best to maintain them, and remember to run these jobs at quiet times: re-indexing, as an example, will impact the performance of the database and, depending on the physical resources of the server, may impact the server as well; depending on the size of the database it can take hours to run.
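To give a feel for what such a maintenance job contains, here is a minimal T-SQL sketch of the routine tasks mentioned above; the database and table names are hypothetical:

```sql
USE MyUserDb;  -- hypothetical database
GO
-- rebuild every index on a heavily used table (SQL 2005 syntax)
ALTER INDEX ALL ON dbo.MyBigTable REBUILD;
-- refresh optimizer statistics across the whole database
EXEC sp_updatestats;
-- back up the transaction log so the log space can be reused
BACKUP LOG MyUserDb TO DISK = 'D:\Backups\MyUserDb_log.trn';
```

In practice you would wrap steps like these in a SQL Agent job scheduled for your quiet hours.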

Don’t forget to have some form of alert if your maintenance job fails; there is no point in having the job if it doesn’t run.

Saturday, 14 June 2008

Distributed File System

Surprisingly few networks have DFS replication in place. This is something you should not overlook, as its uses on large networks with many sites can save both time and resources.

Here are some of the benefits: you can replicate files to another server, giving you some fault tolerance without the expense of a cluster.
Another plus is that it makes it easy to perform offline work on the server, as end users are unaware of the downtime. You can also place profiles on the DFS and have them replicated to all sites, so that the end user is always working from the nearest server for the quickest response and is not loading files over the WAN.

Namespaces also let you link many network shares into a single network drive despite them being on separate servers and/or locations.

It can also help you move resources more easily: by no longer using a server name but a resource name, hardware and locations can be changed without downtime and without remapping resources.

Another plus is that you can backup the namespace just once on DFS replicated drives so you don't take multiple backups of the same information allowing you to save money on backup hardware.

But remember, as always, there are some downsides too.
One is that you will have to make sure you either have enough bandwidth to support the replication or throttle back the DFS replication so it does not overload your site links.

Fault Tolerant DHCP

This week I heard some stories about DHCP network faults that could have been avoided by better design.

The most common point of failure is that networks have just one DHCP server, or that you need to manually bring the second DHCP server online to restore service back to normal.

Branch offices are also hard: you provide the best fault tolerance you can, but you don’t have the same budget for hardware that you have at headquarters.

And I’ve seen people try to get around this by providing DHCP only from head office, but this still leaves you with a point of failure on the connection between the sites; as we know, when communications do fail it quite often takes hours or even days for the provider to get things working again.

So the best way around this I have found for the moment is to use a hybrid of local and remote DHCP servers, so that if the link goes down you have a local resource to provide leases, and if the local resource fails you still have the remote resource.
By providing DHCP from more than one location you get around the single point of failure; however, this does mean that the scope needs to be larger. Microsoft best practice uses an 80/20 rule, meaning that 20% of the addresses are provided by the standby, and you need to remove the excluded addresses manually when the primary DHCP server fails, which takes time. If you are using Cisco routers and switches you can provide a hot standby option with the right hardware.
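One way to express the 80/20 split on Windows 2003 is to create the same scope on both servers and exclude the opposite portion of the range on each; a sketch using netsh, with hypothetical server names and addresses:

```
rem on the primary, exclude the standby's 20% of the range
netsh dhcp server \\dhcp1 scope 192.168.1.0 add excluderange 192.168.1.201 192.168.1.250
rem on the standby, exclude the primary's 80%
netsh dhcp server \\dhcp2 scope 192.168.1.0 add excluderange 192.168.1.10 192.168.1.200
```

If the primary fails, removing the exclusion on the standby frees the other 80% of the range.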

But most likely you’re using Microsoft Windows, so your options are a little more limited. However, if you look at running larger subnets on network address translated ranges you can overcome this; you may be wasting half of your IP range this way, but you don’t have to wait for someone to change over the scope options to get clients connected.

This diagram shows the most fault-tolerant model currently available, using a split DHCP between local and remote and clustered resources at critical sites such as head office.

Saturday, 7 June 2008

IIS 401 and 403 Errors

Back by demand: IIS errors. But unlike most, I'm going to tell you where to look to solve your error.

The substatus codes below give the specific cause of the error. These specific error codes are displayed in the browser but are not displayed in the IIS log:

401.1 - Logon failed.
The client gave the wrong username/password (including none at all). This could be from an incorrect cached auto-login attempt by the browser, or from a user login dialog in the browser.

Invalid Kerberos configuration: on IIS6, if you have a customized Application Pool identity AND Integrated Authentication is used AND the web server is in a domain, you will mysteriously get 401.1 unless you configure SETSPN *or* change Integrated Authentication to favor NTLM. See the documentation on Application Pool identity, Integrated Authentication in IIS, and Constrained Delegation configuration, as well as the additional Kerberos-related troubleshooting articles, for more information.

You enabled Anonymous authentication, yet you still get 401.1 for all requests. One common cause is that the anonymous user credentials stored in the IIS metabase configuration are DIFFERENT from the user principal's credentials in reality (i.e. a mismatched password). In all cases, the preferred solution is to manually synchronize the username/password of the anonymous user principal in IIS with that of the real user principal. I have seen many amazing variations of this cause.

401.2 - Logon failed due to server configuration.
This means you don't have permission under the directory's ACL; change the permissions on the directory in IIS Manager.

401.3 - Unauthorized due to ACL on resource.
This error message is almost the same as 401.2, but in this case it is the file itself you don't have permission to; check the NTFS permissions on the file at drive level to resolve this one.
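A quick way to see the NTFS ACL in question from the command line is cacls (or icacls on newer systems); the path below is hypothetical:

```
cacls "C:\inetpub\wwwroot\protected\page.htm"
```

Compare the users and groups listed against the account IIS is actually running the request under.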

401.4 - Authorization failed by filter.
A loaded ISAPI filter denied the request. This can be a long or short piece of troubleshooting depending on the amount of information output to the application log in Windows.

401.5 - Authorization failed by ISAPI/CGI application.
This error indicates that some ISAPI Extension or CGI Web Application sent back a structured 401 response of some sort. The reasons why the CGI/ISAPI are returning such 401 responses are completely arbitrary and uncontrollable by IIS. You will need to determine WHICH CGI/ISAPI is returning the response and obtain support for it.

Monday, 2 June 2008

Stored Procedure SP_SDS

SQL Server ships with quite a few stored procedures; one of them is sp_spaceused, a useful one to see the space used by databases, however it's a bit basic.
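For comparison, this is all the built-in procedure gives you, run here against a hypothetical database:

```sql
USE MyUserDb;                        -- hypothetical database
EXEC sp_spaceused;                   -- size and unallocated space for the whole database
EXEC sp_spaceused 'dbo.MyBigTable';  -- rows, reserved, data and index size for one table
```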

Now luckily there is a much-extended version that gives far more detail and is useful for a number of reasons; one is that it gives the percentages used by the data, not just the total size.

So here it is one lovely procedure written by one Mr Ding.

USE master;
GO

CREATE PROCEDURE dbo.sp_SDS
@TargetDatabase sysname = NULL, -- NULL: all dbs
@Level varchar(10) = 'Database', -- or "File"
@UpdateUsage bit = 0, -- default no update
@Unit char(2) = 'MB' -- Megabytes, Kilobytes or Gigabytes
AS

/*
** author: Richard Ding
** date: 4/8/2008
** usage: list db size AND path w/o SUMmary
** test code: sp_SDS -- default behavior
** sp_SDS 'master'
** sp_SDS NULL, NULL, 0
** sp_SDS NULL, 'file', 1, 'GB'
** sp_SDS 'Test_snapshot', 'Database', 1
** sp_SDS 'Test', 'File', 0, 'kb'
** sp_SDS 'pfaids', 'Database', 0, 'gb'
** sp_SDS 'tempdb', NULL, 1, 'kb'
*/

SET NOCOUNT ON;

IF @TargetDatabase IS NOT NULL AND DB_ID(@TargetDatabase) IS NULL
BEGIN
RAISERROR(15010, -1, -1, @TargetDatabase);
RETURN (-1);
END

IF OBJECT_ID('tempdb.dbo.##Tbl_CombinedInfo', 'U') IS NOT NULL
DROP TABLE dbo.##Tbl_CombinedInfo;

IF OBJECT_ID('tempdb.dbo.##Tbl_DbFileStats', 'U') IS NOT NULL
DROP TABLE dbo.##Tbl_DbFileStats;

IF OBJECT_ID('tempdb.dbo.##Tbl_ValidDbs', 'U') IS NOT NULL
DROP TABLE dbo.##Tbl_ValidDbs;

IF OBJECT_ID('tempdb.dbo.##Tbl_Logs', 'U') IS NOT NULL
DROP TABLE dbo.##Tbl_Logs;

CREATE TABLE dbo.##Tbl_CombinedInfo (
DatabaseName sysname NULL,
[type] VARCHAR(10) NULL,
LogicalName sysname NULL,
T dec(10, 2) NULL,
U dec(10, 2) NULL,
[U(%)] dec(5, 2) NULL,
F dec(10, 2) NULL,
[F(%)] dec(5, 2) NULL,
PhysicalName sysname NULL );

CREATE TABLE dbo.##Tbl_DbFileStats (
Id int identity,
DatabaseName sysname NULL,
FileId int NULL,
FileGroup int NULL,
TotalExtents bigint NULL,
UsedExtents bigint NULL,
Name sysname NULL,
FileName varchar(255) NULL );

CREATE TABLE dbo.##Tbl_ValidDbs (
Id int identity,
Dbname sysname NULL );

CREATE TABLE dbo.##Tbl_Logs (
DatabaseName sysname NULL,
LogSize dec (10, 2) NULL,
LogSpaceUsedPercent dec (5, 2) NULL,
Status int NULL );

DECLARE @Ver varchar(10),
@DatabaseName sysname,
@Ident_last int,
@String varchar(2000),
@BaseString varchar(2000);

SELECT @DatabaseName = '',
@Ident_last = 0,
@String = '',
@Ver = CASE WHEN @@VERSION LIKE '%9.0%' THEN 'SQL 2005'
WHEN @@VERSION LIKE '%10.0%' THEN 'SQL 2008'
ELSE 'SQL 2000' END;

SELECT @BaseString =
CASE WHEN @Ver = 'SQL 2000' THEN 'CASE WHEN status & 0x40 = 0x40 THEN ''Log'' ELSE ''Data'' END'
ELSE ' CASE type WHEN 0 THEN ''Data'' WHEN 1 THEN ''Log'' WHEN 4 THEN ''Full-text'' ELSE ''reserved'' END' END +
', name, ' +
CASE WHEN @Ver = 'SQL 2000' THEN 'filename' ELSE 'physical_name' END +
', size*8.0/1024.0 FROM ' +
CASE WHEN @Ver = 'SQL 2000' THEN 'sysfiles' ELSE 'sys.database_files' END +
' WHERE ' + CASE WHEN @Ver = 'SQL 2000' THEN 'HAS_DBACCESS(DB_NAME()) = 1' ELSE 'state_desc = ''ONLINE''' END + '';

SELECT @String = 'INSERT INTO dbo.##Tbl_ValidDbs SELECT name FROM ' +
CASE WHEN @Ver = 'SQL 2000' THEN 'master.dbo.sysdatabases'
WHEN @Ver IN ('SQL 2005', 'SQL 2008') THEN 'master.sys.databases'
END;
EXEC (@String);


-- For data part
IF @TargetDatabase IS NOT NULL -- single database
BEGIN
SELECT @DatabaseName = @TargetDatabase;
IF @UpdateUsage <> 0 AND DATABASEPROPERTYEX (@DatabaseName, 'Status') = 'ONLINE'
AND DATABASEPROPERTYEX (@DatabaseName, 'Updateability') <> 'READ_ONLY'
BEGIN
SELECT @String = 'USE [' + @DatabaseName + '] DBCC UPDATEUSAGE (0)';
PRINT '*** ' + @String + ' *** ';
EXEC (@String);
END

SELECT @String = 'INSERT INTO dbo.##Tbl_CombinedInfo (DatabaseName, type, LogicalName, PhysicalName, T) ' + @BaseString;
EXEC ('USE [' + @DatabaseName + '] ' + @String);

INSERT INTO dbo.##Tbl_DbFileStats (FileId, FileGroup, TotalExtents, UsedExtents, Name, FileName)
EXEC ('USE [' + @DatabaseName + '] DBCC SHOWFILESTATS WITH NO_INFOMSGS');

UPDATE dbo.##Tbl_DbFileStats SET DatabaseName = @DatabaseName;
END
ELSE -- all databases
BEGIN
WHILE 1 = 1
BEGIN
SELECT TOP 1 @DatabaseName = Dbname FROM dbo.##Tbl_ValidDbs WHERE Dbname > @DatabaseName ORDER BY Dbname ASC;
IF @@ROWCOUNT = 0
BREAK;
IF @UpdateUsage <> 0 AND DATABASEPROPERTYEX (@DatabaseName, 'Status') = 'ONLINE'
AND DATABASEPROPERTYEX (@DatabaseName, 'Updateability') <> 'READ_ONLY'
BEGIN
SELECT @String = 'DBCC UPDATEUSAGE (''' + @DatabaseName + ''') ';
PRINT '*** ' + @String + '*** ';
EXEC (@String);
END

SELECT @Ident_last = ISNULL(MAX(Id), 0) FROM dbo.##Tbl_DbFileStats;

SELECT @String = 'INSERT INTO dbo.##Tbl_CombinedInfo (DatabaseName, type, LogicalName, PhysicalName, T) ' + @BaseString;
EXEC ('USE [' + @DatabaseName + '] ' + @String);

INSERT INTO dbo.##Tbl_DbFileStats (FileId, FileGroup, TotalExtents, UsedExtents, Name, FileName)
EXEC ('USE [' + @DatabaseName + '] DBCC SHOWFILESTATS WITH NO_INFOMSGS');

UPDATE dbo.##Tbl_DbFileStats SET DatabaseName = @DatabaseName WHERE Id BETWEEN @Ident_last + 1 AND @@IDENTITY;
END
END

-- set used size for data files (log file usage is filled in below from DBCC SQLPERF)
UPDATE dbo.##Tbl_CombinedInfo
SET U = s.UsedExtents*8*8/1024.0
FROM dbo.##Tbl_CombinedInfo t JOIN dbo.##Tbl_DbFileStats s
ON t.LogicalName = s.Name AND s.DatabaseName = t.DatabaseName;

-- populate log sizes, then set used size and % values for log files:
INSERT INTO dbo.##Tbl_Logs
EXEC ('DBCC SQLPERF (LOGSPACE)');

UPDATE dbo.##Tbl_CombinedInfo
SET [U(%)] = LogSpaceUsedPercent,
U = T * LogSpaceUsedPercent/100.0
FROM dbo.##Tbl_CombinedInfo t JOIN dbo.##Tbl_Logs l
ON l.DatabaseName = t.DatabaseName
WHERE t.type = 'Log';

UPDATE dbo.##Tbl_CombinedInfo SET F = T - U, [U(%)] = U*100.0/T;

UPDATE dbo.##Tbl_CombinedInfo SET [F(%)] = F*100.0/T;

IF @Unit = 'KB'
UPDATE dbo.##Tbl_CombinedInfo
SET T = T * 1024, U = U * 1024, F = F * 1024;

IF @Unit = 'GB'
UPDATE dbo.##Tbl_CombinedInfo
SET T = T / 1024, U = U / 1024, F = F / 1024;

SELECT DatabaseName AS 'Database',
type AS 'Type',
T AS 'Total',
U AS 'Used',
[U(%)] AS 'Used (%)',
F AS 'Free',
[F(%)] AS 'Free (%)',
PhysicalName AS 'Path'
FROM dbo.##Tbl_CombinedInfo
WHERE DatabaseName LIKE ISNULL(@TargetDatabase, '%')
ORDER BY DatabaseName ASC, type ASC;

SELECT SUM (T) AS 'TOTAL', SUM (U) AS 'USED', SUM (F) AS 'FREE' FROM dbo.##Tbl_CombinedInfo;

DECLARE @Tbl_Final TABLE (
DatabaseName sysname NULL,
TOTAL dec (10, 2),
[=] char(1),
used dec (10, 2),
[used (%)] dec (5, 2),
[+] char(1),
free dec (10, 2),
[free (%)] dec (5, 2),
[==] char(2),
Data dec (10, 2),
Data_Used dec (10, 2),
[Data_Used (%)] dec (5, 2),
Data_Free dec (10, 2),
[Data_Free (%)] dec (5, 2),
[++] char(2),
Log dec (10, 2),
Log_Used dec (10, 2),
[Log_Used (%)] dec (5, 2),
Log_Free dec (10, 2),
[Log_Free (%)] dec (5, 2) );

INSERT INTO @Tbl_Final
SELECT x.DatabaseName,
x.Data + y.Log AS 'TOTAL',
'=' AS '=',
x.Data_Used + y.Log_Used AS 'used',
(x.Data_Used + y.Log_Used)*100.0 / (x.Data + y.Log) AS 'used (%)',
'+' AS '+',
x.Data_Free + y.Log_Free AS 'free',
(x.Data_Free + y.Log_Free)*100.0 / (x.Data + y.Log) AS 'free (%)',
'==' AS '==',
x.Data,
x.Data_Used,
x.Data_Used*100/x.Data AS 'Data_Used (%)',
x.Data_Free,
x.Data_Free*100/x.Data AS 'Data_Free (%)',
'++' AS '++',
y.Log,
y.Log_Used,
y.Log_Used*100/y.Log AS 'Log_Used (%)',
y.Log_Free,
y.Log_Free*100/y.Log AS 'Log_Free (%)'
FROM ( SELECT d.DatabaseName,
SUM(d.T) AS 'Data',
SUM(d.U) AS 'Data_Used',
SUM(d.F) AS 'Data_Free'
FROM dbo.##Tbl_CombinedInfo d WHERE d.type = 'Data' GROUP BY d.DatabaseName ) AS x
JOIN ( SELECT l.DatabaseName,
SUM(l.T) AS 'Log',
SUM(l.U) AS 'Log_Used',
SUM(l.F) AS 'Log_Free'
FROM dbo.##Tbl_CombinedInfo l WHERE l.type = 'Log' GROUP BY l.DatabaseName ) AS y
ON x.DatabaseName = y.DatabaseName;

IF @Unit = 'KB'
UPDATE @Tbl_Final SET TOTAL = TOTAL * 1024,
used = used * 1024,
free = free * 1024,
Data = Data * 1024,
Data_Used = Data_Used * 1024,
Data_Free = Data_Free * 1024,
Log = Log * 1024,
Log_Used = Log_Used * 1024,
Log_Free = Log_Free * 1024;

IF @Unit = 'GB'
UPDATE @Tbl_Final SET TOTAL = TOTAL / 1024,
used = used / 1024,
free = free / 1024,
Data = Data / 1024,
Data_Used = Data_Used / 1024,
Data_Free = Data_Free / 1024,
Log = Log / 1024,
Log_Used = Log_Used / 1024,
Log_Free = Log_Free / 1024;

DECLARE @GrantTotal dec(11, 2);
SELECT @GrantTotal = SUM(TOTAL) FROM @Tbl_Final;

SELECT CONVERT(dec(10, 2), TOTAL*100.0/@GrantTotal) AS 'WEIGHT (%)',
DatabaseName AS 'DATABASE',
CONVERT(VARCHAR(12), used) + ' (' + CONVERT(VARCHAR(12), [used (%)]) + ' %)' AS 'USED (%)',
CONVERT(VARCHAR(12), free) + ' (' + CONVERT(VARCHAR(12), [free (%)]) + ' %)' AS 'FREE (%)',
CONVERT(VARCHAR(12), Data) + ' (' + CONVERT(VARCHAR(12), Data_Used) + ', ' +
CONVERT(VARCHAR(12), [Data_Used (%)]) + '%)' AS 'DATA (used, %)',
CONVERT(VARCHAR(12), Log) + ' (' + CONVERT(VARCHAR(12), Log_Used) + ', ' +
CONVERT(VARCHAR(12), [Log_Used (%)]) + '%)' AS 'LOG (used, %)'
FROM @Tbl_Final
WHERE DatabaseName LIKE ISNULL(@TargetDatabase, '%')
ORDER BY DatabaseName ASC;

IF @TargetDatabase IS NULL
SELECT SUM (TOTAL) AS 'TOTAL',
SUM (used) AS 'USED',
SUM (free) AS 'FREE',
SUM (Data) AS 'DATA',
SUM (Log) AS 'LOG'
FROM @Tbl_Final;

GO

This will show you far more detail on usage and will run on all the databases at once, while the standard sp_spaceused would have to be run many times.

I just wanted to share this with you as it has saved me lots of time over the past month.

Wednesday, 28 May 2008

IP Spoofing

Today’s subject is IP spoofing, and before you ask: no, I’m not going to tell you how it’s done; you’ll just have to use Google to find that out for yourself.

So here it is RPF (reverse-path forwarding) the Unicast RPF feature helps to mitigate problems that are caused by the introduction of malformed or forged (spoofed) IP source addresses into a network by discarding IP packets that lack a verifiable IP source address.

This mitigates a number of attack methods that rely on falsifying the traffic source to create a denial of service (DoS). When enabled, the device checks the source address of each packet against the interface through which the packet arrived. This will help defend your network from spoofed packets that are causing problems.

Note: Unicast RPF should not be used on interfaces that are internal to the network.

Verifying the source address of IP traffic against routing rules reduces the possibility that an attacker can spoof the source of an attack; packets are dropped if the device determines, by checking its routing tables, that there is no feasible path through the arrival interface for the source address.

Enabling reverse-path verification in environments with asymmetric routes can adversely affect network traffic, so be careful about the environment you use this in; but for some 80% of you this should not be an issue.

So here are the commands.

On a Cisco ASA

interface {interface_name}
ip verify reverse-path interface {interface_name}

On a Cisco Router the command is

ip cef distributed
interface {interface_name}
ip verify unicast reverse-path
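Once configured, you can check that the feature is active and keep an eye on the drop counters; the exact output wording varies by IOS version, so treat this as a sketch:

```
! confirm CEF and the RPF check are active on the interface
show cef interface serial0/1
! the IP statistics include a counter of packets dropped by unicast RPF
show ip traffic
```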

But there are some other points about the router that you need to know.

The first command enables CEF or distributed CEF on the router. Distributed CEF is required for routers that use a Route Switch Processor (RSP) and Versatile Interface Processor (VIP), and CEF is a prerequisite for Unicast RPF.

You might want to disable CEF or distributed CEF (dCEF) on a particular interface if that interface is configured with a feature that CEF or dCEF does not support.

In this case, you would enable CEF globally, but disable CEF on a specific interface using the

interface {interface_name}
no ip route-cache cef

which enables all but that specific interface to use express forwarding. If you have disabled CEF or dCEF operation on an interface and want to reenable it, you can do so by using the

interface {interface_name}
ip route-cache cef

command in interface configuration mode.

Also, you can use access lists with RPF to log or drop packets, using ip verify unicast reverse-path {list number}.

In this next example the logging option is turned on for the access list entries, and dropped packets are counted per interface and globally. Packets arriving at interface S0/1 from the matching source address are forwarded because of the permit statement in access list 197.

Access list information about dropped or suppressed packets is logged (logging option turned on for the access list entry) to the log server.

interface s0/1
ip verify unicast reverse-path 197

access-list 197 deny ip any log-input
access-list 197 permit ip any log-input
access-list 197 deny ip any log-input
access-list 197 permit ip any log-input
access-list 197 deny ip host any log

But the access list is up to you: how you configure your RPF depends on the packets you're expecting on your network. I would suggest you always keep the logging option on at first, until you're happy with your setup.

Also keep an eye on the CPU load of the router and ASA, as these options can load the CPU if you have a fast connection with lots of traffic going over it.

Sunday, 25 May 2008

Cisco Firewalls NTP

I had a comment this week that the NTP commands I posted didn't work for some of you. After a quick investigation I found the problem: you're talking about the Cisco ASA (Adaptive Security Appliance). Both these and the PIX (Private Internet Exchange) do not have the same commands as Cisco routers, and previously I was talking about Cisco routers.

So I'll try my best to keep my postings clear about which Cisco appliance I'm talking about. To recap, this posting is about the ASA and NTP.

In a simple model you could just use the IP of your NTP server and the interface that it's on.

ntp server {ntp-server_ip_address} [source interface_name]

This would be enough in most networks where you are taking the time from a local NTP server, whether Linux, Unix or Windows.

However, in a large enterprise, or where the time server is external, I strongly recommend you use MD5 authentication; otherwise people can send time packets to the device that will confuse the time on it and make tracking a real attack very hard.

ntp authenticate
ntp trusted-key {ntp_key_id}
ntp authentication-key {ntp_key_id} md5 {ntp_key}
ntp server {ntp-server_ip_address}{key ntp_key_id} [source interface_name]
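Filled in with hypothetical values (key number 1, a made-up shared secret and the documentation address 192.0.2.10), the authenticated setup would look something like this:

```
ntp authenticate
ntp trusted-key 1
ntp authentication-key 1 md5 MySharedSecret
ntp server 192.0.2.10 key 1 source inside
```

The key number and secret must of course match what is configured on the NTP server itself.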

This might sound like a lot of work for one service, but remember: every service that is not locked down is a threat to your network, as it can and will be used against you. NTP might not sound very dangerous, but it is very useful for your attacker to be able to confuse you as to when an attack really took place.

Tuesday, 20 May 2008

PC Imaging Vs PC Backups Vs Remote Operating System Deployment

The main way to keep your users' workstations and notebooks running is never liked by them, but as I say: read the network usage policy.

You know the drill: lock down the install functions and remove anything and everything that is not needed. Control what can be seen on the internet, and if they can't find a business reason for it, don't allow it. Don’t let them connect devices to the PC or network without express permission from the network administrator.

However, while this keeps them from crashing the machine by installing disruptive programs, what about in the event of hardware failure? There are three main ways on the market to get around hardware failure, and the most popular at the moment is imaging.

Imaging creates a file that contains an exact copy of the data on the drive at that time. The main thing to note is that you will need enough space to keep these images on the network, because if they are on the local PC and the drive fails you’ve lost everything. That might be as much as 5GB per workstation and notebook, and that is after compression, plus incrementals.

The second point is that in a domain, computer account passwords change after 90 days, so if the image was taken while joined to the domain and is older than 90 days it is no good; however, Symantec have an option in Ghost to disjoin the PC before the backup and rejoin after it has finished.

The third point is that the restore has to be done to similar hardware. This is where Acronis has the upper hand at the moment with their Universal Restore, making it easier to restore to other hardware; note I said easier, not easy, as there is still some hardware you will have lots of problems with due to drivers.

The fourth point is that when you have a lot of identical hardware you can build one image and deploy it to all of the PCs by CD or network, speeding up large builds.

PC Backup
PC backups use the same basic idea as imaging but back up to tape or SAN. This again means that you need enough space to back up all your PCs to tape or SAN, so this too might leave you with a space issue.

The second point is that you have to restore to similar hardware. There are some tools out there to help you with this depending on your backup solution, but sadly there are just too many for me to cover, so I won't.

The third point is that backups don’t really help you deploy new PCs to the network, so you are still left having to use one of the other two options if you want to speed up your installs.

The fourth point is remote backups: notebooks might not be onsite when you want to back them up. Some backup software will work over a VPN link, while some will not handle the speed; keep this in mind when picking a backup product. Imaging, by contrast, can be done to a local hard drive and copied to the network when available.

Remote Operating System Deployment
Last but not least is deploying the operating system from a network location: if something goes wrong with the hardware, you just install a fresh build on the new or repaired hardware, and these installs can be customised to include all your applications. Again, space on the network to hold the installs is needed, but less than with the imaging and backup options; as an example, a custom Vista install is about 7GB and, depending on the drivers you place in the package, it might handle all your hardware from one build.

Second point is that if users store information on their desktop and you don’t have roaming profiles, they will lose it when a fresh install is done, so remember this drawback of doing fresh builds every time. If you are using roaming profiles, you'll notice that because you keep profiles rather than the full contents of the hard drive, you save more than 20% of the space, because you don't store the operating system files.

Third point is that you will have to update this build as often as your hardware changes, and if you are buying only a few PCs at a time from different hardware vendors, you’ll end up having to create two or more builds to cover all your hardware, which can be very time consuming.

Now the overview
So I know what you're dying to ask: which one would I use? Well, there is no single good answer, as the best choice can change depending on the network and the SLA for getting the PC running again.

Personally I would say that Remote Operating System Deployment is the better choice if you have roaming profiles and a secure domain, but if you have local profiles only, then you have to look at backup or imaging and see which one works better in your environment. Lastly, consider the cost of each; none of these are free, but all of them are cost-effective, as they beat spending your time building PCs by hand.

In short, a mix of these should be used, and sometimes even all three. However, a word of warning: don't make your life harder than it has to be. Find one product in each area you're happy with and use it; don't mix and match. Remember, you're doing this to free up your time, not to spend it reading product manuals.

Yes, you heard the M word, "Manuals". I know we don't like reading them, but with this kind of thing it's best to learn all the options of a product before deploying it.

I'll leave you with this as a final example: backup for workstations and servers, remote operating system deployment for new builds, and imaging for notebooks.

SQL Transaction Logs

SQL database transaction logs can become quite large, to put it politely, if you are running the full recovery model on a database and not monitoring it closely. The other option is to put the database in the simple recovery model, but I don’t know of a company that would use simple, as who would say that their database is not important?

So how do you keep the transaction log from getting massive? The simple fact is that you need to back up the transaction log, but this only clears the inactive transactions; it doesn’t shrink the file. So the question when you say it’s getting too big is: which kind of space do you mean? You have two kinds of space to think of:

Space on disk (the physical space occupied by the transaction log file)
White space (the unused space inside the transaction log file)

White space is what happens when you back up the transaction log: the backed-up content is cleared, but the log file does not shrink, so the physical disk space doesn’t change. However, the white space within the transaction log can now be filled with new information and changes.

How this affects you depends on whether you grow your databases by autogrow or by manually adding space as you go along.
If you manually grow the transaction log, then you will need to make sure your backups happen frequently enough that you are not having to grow the transaction log every few days or hours.
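As a minimal sketch of such a log backup (MyDB and the backup path are placeholders for your own database name and backup location):

```sql
-- Back up the transaction log regularly; this clears the inactive
-- transactions (creating white space) without shrinking the file.
BACKUP LOG MyDB
TO DISK = 'D:\Backups\MyDB_log.trn'
WITH NAME = 'MyDB transaction log backup';
```

Schedule this as a SQL Server Agent job at whatever frequency keeps the log from having to grow.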

A few good rules of thumb:
Back up the transaction log files regularly to clear the inactive transactions in your transaction log.
Design the transactions to be small.
Make sure that no uncommitted transactions continue to run for an indefinite time.
Schedule the Update Statistics option to occur daily.
To defragment the indexes and benefit workload performance in your production environment, use the DBCC INDEXDEFRAG Transact-SQL statement instead of the DBCC DBREINDEX Transact-SQL statement. If you run the DBCC DBREINDEX statement, the transaction log may expand significantly when your SQL Server database is in the full recovery model. Additionally, the DBCC INDEXDEFRAG statement does not hold locks for a long time, unlike the DBCC DBREINDEX statement.

Note: Microsoft is replacing these DBCC commands with the ALTER INDEX statement in later versions.
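To sketch the difference (MyDB, dbo.MyTable and IX_MyTable are placeholder names for your own database, table and index):

```sql
-- Online defragmentation: held locks are short, log growth is modest.
DBCC INDEXDEFRAG (MyDB, 'dbo.MyTable', 'IX_MyTable');

-- The roughly equivalent ALTER INDEX form in SQL Server 2005 and later:
ALTER INDEX IX_MyTable ON dbo.MyTable REORGANIZE;

-- Full rebuild (DBCC DBREINDEX equivalent): more thorough, but can
-- expand the transaction log significantly under the full recovery model.
ALTER INDEX IX_MyTable ON dbo.MyTable REBUILD;
```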

The simple recovery model allows you to recover data only to the most recent full database or differential backup. Transaction log backups are not available because the contents of the transaction log are truncated each time a checkpoint is issued for the database.

The full recovery model uses database backups and transaction log backups to provide complete protection against failure. Along with being able to restore a full or differential backup, you can recover the database to the point of failure or to a specific point in time. All operations, including bulk operations such as SELECT INTO, CREATE INDEX and bulk-loading data, are fully logged and recoverable.

The bulk-logged recovery model provides protection against failure combined with the best performance. In order to get better performance, the following operations are minimally logged and not fully recoverable: SELECT INTO, bulk-load operations, CREATE INDEX as well as text and image operations. Under the bulk-logged recovery model, a damaged data file can result in having to redo work manually based on the operations that are not fully logged. In addition, the bulk-logged recovery model only allows the database to be recovered to the end of a transaction log backup when the log backup contains bulk changes.

So once again, based on the information above it looks like the Full Recovery model is the way to go. Given the flexibility of the full recovery model, why would you ever select any other model? The following factors will help you determine when another model could work for you:

Select Simple if:
Your data is not critical.
Losing all transactions since the last full or differential backup is not an issue.
Data is derived from other data sources and is easily recreated.
Data is static and does not change often.
Space is limited to log transactions. (This may be a short-term reason, but not a good long-term reason.)

Select Bulk-Logged if:
Data is critical, but logging large data loads bogs down the system.
Most bulk operations are done off hours and do not interfere with normal transaction processing.
You need to be able to recover to a point in time.

Select Full if:
Data is critical and no data can be lost.
You always need the ability to do a point-in-time recovery.
Bulk-logged activities are intermixed with normal transaction processing.
You are using replication and need the ability to resynchronize all databases involved in replication to a specific point in time.
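Checking and switching the recovery model is a one-line change (a sketch; MyDB is a placeholder for your own database name):

```sql
-- See the current recovery model of a database.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyDB';

-- Switch between the three models as needed.
ALTER DATABASE MyDB SET RECOVERY FULL;
-- ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;
-- ALTER DATABASE MyDB SET RECOVERY SIMPLE;
```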

Note: with full recovery you will need to back up the transaction log frequently to prevent it growing out of control; if you don’t do this, you will need to manually truncate it with the TRUNCATE_ONLY command.

BACKUP LOG database WITH TRUNCATE_ONLY: not recommended!!
After backing up the log using either NO_LOG or TRUNCATE_ONLY, the changes recorded in the log are not recoverable, and you will be unable to restore to a point in time. For recovery purposes, immediately execute BACKUP DATABASE afterwards.
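If you really must do this, the emergency sequence looks like the following sketch (MyDB and the backup path are placeholders; note that the TRUNCATE_ONLY option was removed in SQL Server 2008, where switching briefly to the simple recovery model is the replacement):

```sql
-- Discard the inactive log without backing it up.
-- WARNING: this breaks the log backup chain.
BACKUP LOG MyDB WITH TRUNCATE_ONLY;

-- Immediately take a full database backup so you have
-- a valid recovery point again.
BACKUP DATABASE MyDB
TO DISK = 'D:\Backups\MyDB_full.bak';
```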

So to make it clear, after all this you should do the following.
The best practice, and the way to prevent ever having to run the NO_LOG or TRUNCATE_ONLY commands, is to put the database in the recovery model that best fits it.

Do regular log backups to keep the transaction log a reasonable size, and run a maintenance plan that shrinks the log files and database on your server regularly, as well as reindexing and other fine-tuning tasks.
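Reclaiming the physical disk space after a log backup is a separate step; a sketch, where MyDB is a placeholder and MyDB_log is the logical name of the log file (look it up first, as shown):

```sql
USE MyDB;

-- Find the logical name of the transaction log file.
SELECT name, type_desc FROM sys.database_files;

-- Shrink the log file down to a target size in MB (here 100 MB).
-- Only the white space cleared by log backups can be released.
DBCC SHRINKFILE (MyDB_log, 100);
```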