Wednesday, 4 November 2009

SQL 2008 on VMware

Running Microsoft SQL Server on VMware is not as simple as it first appears if you want to get the best out of it, because there are a few more things to take into account when building the server.

So after looking at this I decided to put down some guidelines for virtual servers running Microsoft SQL, so here they are.

1) Priority boost. By default, the priority boost setting is 0, which causes SQL Server to run at normal priority whether you run SQL Server on a uniprocessor computer or on a symmetric multiprocessor (SMP) computer. I recommend you change this to 1, which causes the SQL Server process to run at high priority.

2) Enable large pages. Trace flag 834: use Microsoft Windows large-page allocations for the buffer pool.

Trace flag 834 applies only to 64-bit versions of SQL Server. You must have the Lock pages in memory user right to turn on trace flag 834. You can turn on trace flag 834 only at startup.

Trace flag 834 causes SQL Server to use Microsoft Windows large-page allocations for the memory that is allocated for the buffer pool. The page size varies depending on the hardware platform, but the page size may be from 2 MB to 16 MB. Large pages are allocated at startup and are kept throughout the lifetime of the process. Trace flag 834 improves performance by increasing the efficiency of the translation look-aside buffer (TLB) in the CPU.

3) Disk alignment. Partitions should be aligned at 1024 KB, and the NTFS allocation unit size set appropriately.

When formatting the partition that will be used for SQL Server data files, it is recommended that you use a 64-KB allocation unit size for data, logs, and tempdb.
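
The arithmetic behind these recommendations is easy to check. Here is a small sketch (a hypothetical helper, not a Microsoft tool) that tests whether a partition offset divides evenly by the recommended alignment:

```python
# Hypothetical helper: check that a partition offset divides evenly
# by the recommended 1024 KB alignment.
KB = 1024

def is_aligned(partition_offset_bytes, alignment_kb=1024):
    return partition_offset_bytes % (alignment_kb * KB) == 0

# A 1024 KB starting offset is aligned; the legacy 63-sector
# (32,256-byte) offset used by older Windows versions is not.
print(is_aligned(1024 * KB))  # True
print(is_aligned(63 * 512))   # False
```

The same check with alignment_kb=64 applies to the 64-KB allocation unit size.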

4) If more than 3 GB is desired, use 64-bit versions of the OS and application.

VMware recommends this for all Microsoft SQL Server versions.

5) Increase the virtual disk heap: raise the VMFS3 maximum heap size from 16 MB to 64 MB.

This mostly applies to older versions of ESX Server.

And there you have it: five points for tuning your Microsoft SQL Server on VMware. The one thing I left out was how to set up your SAN storage for the best I/O. That was deliberate, both because of the size of the post needed to explain it properly and because of the number of types of SAN devices on the market.

Monday, 26 October 2009

SQL 2005 tuning

Almost any administrator can install Microsoft SQL Server; after all, it doesn't take a great deal of knowledge to click Next. But how many of us really have optimised systems?

Here are some useful pointers.

Do you have optimised drives for SQL? Currently the best setup is 1024 KB partition alignment; this must be done from diskpart on Windows 2000 and 2003, while Windows 2008 uses 1024 KB alignment by default. You should ideally have a minimum of three drives for your databases:
one drive dedicated to tempdb
one or more drives dedicated to .mdf and .ndf data files
one or more drives dedicated to .ldf log files
and these drives should ideally be on two or more RAID controllers.

After SQL Server has been installed, the first thing you should do is correct the default database locations so that new databases are created on the correct drives.

USE [master]
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer',
N'DefaultData', REG_SZ, N'E:\Microsoft SQL Server\MSSQL.1\MSSQL\Data'
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer',
N'DefaultLog', REG_SZ, N'E:\Microsoft SQL Server\MSSQL.1\MSSQL\Data'

Next, move the tempdb database. It will have been placed in the same directory as the system databases, and it should be on its own drive. This is done by running the following SQL query and then restarting the SQL service:

USE [master]
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\Microsoft SQL Server\MSSQL.1\MSSQL\Data\tempdb.mdf')
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\Microsoft SQL Server\MSSQL.1\MSSQL\Data\templog.ldf')

To move the model database, stop SQL Server and start the instance from the command line using
NET START MSSQLSERVER /c /m /T3608, then run the following SQL query to detach the model database.

USE [master]
EXEC sp_detach_db 'model'


Move the Model.mdf and Modellog.ldf files from the original location, C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data, to the new location, for example:
E:\Microsoft SQL Server\MSSQL.1\MSSQL\Data.
Reattach the model database by using the following commands:

USE [master]
EXEC sp_attach_db 'model',
'E:\Microsoft SQL Server\MSSQL.1\MSSQL\Data\model.mdf',
'E:\Microsoft SQL Server\MSSQL.1\MSSQL\Data\modellog.ldf'


Now stop SQL Server and start it normally from Windows services.

Note: make sure the directory structure exists before moving the database.

For best performance, tempdb should have one data file per physical CPU core assigned to SQL Server; a dual-core CPU counts as two, but hyperthreading does not. To find the correct number you can use the following script.

strComputer = "."
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\CIMV2")

Set colCSes = objWMIService.ExecQuery("SELECT * FROM Win32_ComputerSystem")
For Each objCS In colCSes
WScript.Echo "Computer Name: " & objCS.Name
WScript.Echo "System Type: " & objCS.SystemType
WScript.Echo "Number Of Physical Processors: " & objCS.NumberOfProcessors
Next

i = 0
Set colItems = objWMIService.ExecQuery("SELECT * FROM Win32_Processor")
For Each objItem in colItems
i = i + 1
Wscript.Echo "==================="
Wscript.Echo "== Processor " & i & " =="
Wscript.Echo "==================="
Wscript.Echo "Processor: " & objItem.Name
Wscript.Echo "NumberOfCores: " & objItem.NumberOfCores
Wscript.Echo "NumberOfLogicalProcessors: " & objItem.NumberOfLogicalProcessors
Next



The same can be done on Windows 2008 with WMIC:

wmic cpu get NumberOfCores, NumberOfLogicalProcessors
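
Following the rule above, the calculation can be sketched like this (a hypothetical helper, fed with the NumberOfCores values the scripts report):

```python
# Sketch of the rule above: one tempdb data file per physical core,
# counting dual-core CPUs as two but ignoring hyperthreaded logical CPUs.
def suggested_tempdb_files(cores_per_socket):
    # cores_per_socket: the NumberOfCores value for each CPU reported above
    return max(1, sum(cores_per_socket))

print(suggested_tempdb_files([2, 2]))  # two dual-core sockets -> 4 files
```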

For best performance you need to grant the SQL service account the "Lock pages in memory" and "Perform volume maintenance tasks" rights. This can be done by editing the local policy on the server using gpedit.msc, or by a domain policy assigned to these servers.

Note: a Windows server restart is needed for the policy to take effect.

Finally, change the SQL startup parameters to optimise the system. Here are some example parameters to use on x64 servers: -c -E -T834 -T2301

-E increases the number of consecutive extents allocated per file to 4.

-T2301 enables more accurate query run-time behaviour modelling in the SQL Server query optimizer; typically only needed for large data set decision-support processing.

-T834 causes the buffer pool to use large pages on systems with 8 GB or more of RAM. These are allocated at startup and are kept throughout the lifetime of the process. This trace flag can only be set on 64-bit systems.

Note: if the system is not x64 or has less than 8 GB of RAM, do not use these switches.

This concludes my notes for a Microsoft SQL 2005 installation. Remember that these are just some of the things I have found useful; I haven't gone into all of them in detail because there isn't enough room in this posting.

Monday, 14 September 2009

Legal Responsibility

Where does accountability lie for security, both virtual and physical, within your company?

We all know the basics, like servers being the responsibility of your IT staff, but that is not the only part; in truth there are around three main areas of responsibility.

Corporate responsibility: this is mostly the legal side of the business, which we will cover shortly.
Then we have application responsibility, which is mostly patching and other vendor-related issues and still sits with your IT department; and lastly user responsibility.

Corporate responsibility is a fuzzy area for most IT departments, as they have never been trained in the legal profession, so start to think about all the legal aspects of IT. First, think about all the software you have: is it really licensed correctly? This can cost a company thousands if there is an audit and you have missing licences.

Second, have you ever dismissed someone who put a USB key or other removable device into the network and caused an outage? Did you explain to them beforehand that they shouldn't do it, in black and white? You can't just dismiss someone for breaking the rules if you haven't first shown them the rules; this comes under the desktop usage policy. Otherwise the company leaves itself open to a counter-claim for unfair dismissal.

In the case of application vendors, the responsibility to patch security holes is almost voluntary, and even those that do provide patches can often be late releasing them. However, this does not excuse you from following best practice on your own network; in fact, despite the large number of security holes in software, most can be mitigated by using a DMZ and Layer 3 and 4 switches to prevent undesired traffic. Remember that if you are going to court because a hole in a vendor's software cost you millions, you first have to show you were not leaving the security gate open yourself.

Lastly, the rogue user. These can be at any level within the company, from data entry to CEO, and represent a real risk because of the data loss and the business impact of that loss.

If alarm bells aren't already ringing in your head, it means either you've covered these points or you are a foolish soul indeed.

Here is a quick checklist of things you should have.

1) A clear desktop usage policy. Ideally this should be attached to the employee handbook so all employees read it, and they should be reminded by a logon banner of some kind. (Remember: if it's not written down, you can't tell them off for it.)

2) Applications and operating systems are not bulletproof, but they can be hardened. Enable the firewalls on the operating system, use Layer 3 and 4 switches to control unwanted traffic, and use DMZs for critical systems, not just public-facing systems such as web and email servers. (It's a lot of effort, I know, but it's all worth it, and having a working network when others are down is a great feeling.)

3) Say no to local data. Storing data on laptops or other removable devices is a security risk at best and foolhardy most of the time. (Yes, a laptop is a removable device; you take it from the company, don't you?) Try to use terminal services where possible to avoid the risk of data leaving the company through theft, and encrypt and password-protect backup media. If users need their email on the go, give them a netbook/notebook, as most other mobile devices do not have encryption, and if one is stolen the data or inbox it connects to has been compromised. There have been cases everywhere from banks to the military where this has happened; no one is above suspicion. It could be the lonely sales guy or the CEO who has his laptop stolen, so make sure the data is not on the laptop. Centralise applications: this also gives greater control over how information is seen and prevents office documents containing corporate data leaving the enterprise network.

The last thing, and this is for your own protection: have a formal risk acceptance form for managers to sign. For example, when they don't want to do what you know is in the best interests of security, write down the risks and get them to sign it, and don't do anything until they sign, because otherwise it's your job that is on the line.


Tuesday, 7 July 2009

Using Ubuntu Syslog with Cisco

Today I decided to show you how to log your Cisco router to a syslog server on Ubuntu.

Before we begin, back up the files, as you never know when you'll change something you didn't mean to:
sudo cp /etc/syslog.conf /etc/syslog.conf.ididamistake

sudo nano /etc/syslog.conf
Add the following lines:
#router logging
local6.debug /var/log/cisco.log

This means: send all messages from facility local6, with a priority of debug or higher, to /var/log/cisco.log.

If this is not enough for you, you can always use local6.*; this can be overkill but is very useful.

If you haven't already, you'll need to create the log file:
sudo touch /var/log/cisco.log

You'll need to enable syslogd to accept messages from remote machines by editing:
sudo nano /etc/default/syslogd

to add the -r option, for example:

SYSLOGD="-r"

Now restart the syslog daemon.
sudo /etc/rc2.d/S10sysklogd restart

you can now create a test message into the syslog to see if it's logging
logger -p local6.debug "is this working?"

Run cat /var/log/cisco.log and you should see the line above.

Now we have a little problem: the message has also been posted to the other log files in /etc/syslog.conf (such as /var/log/syslog, /var/log/messages, and /var/log/debug).
We don’t want the messages from the router mixed in with the system messages.
Edit /etc/syslog.conf to add a local6.none exception wherever we have a catch-all entry, like so:

*.*;auth,authpriv.none;local6.none -/var/log/syslog

Restart the syslog daemon again.

Test that your config is working as expected for each of debug, info, notice, warn, err, crit, alert, and emerg, so run:
logger -p local6.debug "is this working?"
logger -p local6.warn "is this working?"
logger -p local6.info "is this working?"
logger -p local6.err "is this working?"
These should only go to cisco.log.

Check /var/log/cisco.log, /var/log/syslog, /var/log/debug, and /var/log/messages - messages should only be in cisco.log.

Now that your syslog server is setup you need to configure the router to send the messages to the server.

Configuring your router to send messages to the log host couldn't be easier:
config t
logging [ip address of your ubuntu box]
logging facility local6
logging history [severity]
logging on

Your version of IOS may require different commands. Have fun with that.

Logging severity level
emergencies System is unusable (severity=0)
alerts Immediate action needed (severity=1)
critical Critical conditions (severity=2)
errors Error conditions (severity=3)
warnings Warning conditions (severity=4)
notifications Normal but significant conditions (severity=5)
informational Informational messages (severity=6)
debugging Debugging messages (severity=7)
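
For scripting against these levels, the table maps naturally to a lookup. A small sketch (the function name is my own, not an IOS or syslog command):

```python
# The severity table above as a lookup, handy when scripting log filters.
SEVERITY = {
    "emergencies": 0, "alerts": 1, "critical": 2, "errors": 3,
    "warnings": 4, "notifications": 5, "informational": 6, "debugging": 7,
}

def within_level(message_level, configured_level="informational"):
    # Lower numbers are more severe; a router set to "informational"
    # sends everything at severity 6 or more severe.
    return SEVERITY[message_level] <= SEVERITY[configured_level]

print(within_level("errors"))     # True
print(within_level("debugging"))  # False
```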

Normally I stick with informational (sev=6); debugging can create too much information, and unless you have an issue with a router I wouldn't use it.

Compare the logging buffer on your router ("sh logging") with the file on your log server; messages since you made the change should also be going to the server.
If not, make sure you can reach the log server from the router and that UDP port 514 isn't blocked anywhere; otherwise this won't work.

Now we don't want the log file to get too big so we'll setup a log rotation
Add this to /etc/logrotate.conf (sudo nano /etc/logrotate.conf) below "system-specific logs may be configured here":

/var/log/cisco.log {
rotate 7
size 5M
}

Remember you may need to change this depending on the number of messages you get; you can expand the size of the file as well, and if you have access lists with the logging option on, the file can get quite large.
If you'd like to learn more about the logging options, here is a useful link.

Wednesday, 1 July 2009

Apache Security

As web servers go, Apache is one I like a lot; it's stable, and its very light footprint is great. After install it's ready to run with no big modifications needed; however, one thing that does need to be addressed is the security of the account it runs under.

I've noticed that a number of people do not set up any user account for Apache, leaving it to run under the default service account. This can expose the service to web hackers, who can then read the list of running services and use it to find other exploits on the system.

Create an account with a name such as apache to run the web server software. Since this account will never be used for shell access, we do not need to create the normal user account login files.

On Ubuntu this is done like so:
sudo groupadd apache && sudo useradd apache -g apache -d /dev/null -s /sbin/nologin

Before editing apache2.conf I would recommend you make a backup of the file:
cp /etc/apache2/apache2.conf /etc/apache2/apache2.conf.dontmessthisup

Now add the user to the apache2.conf file for Apache to use.
sudo nano /etc/apache2/apache2.conf

add the following lines to the apache2.conf
User apache
Group apache

Save and close the file; you'll then need to restart Apache for the change to take effect:
sudo /etc/init.d/apache2 restart

Another good security tip for websites that handle transactions and other internet sales activity is to change the logging to use syslog. This can be done by editing apache2.conf to change the ErrorLog line from:

ErrorLog /var/log/apache2/error.log

To syslog

ErrorLog syslog:local7

This will now log to syslog as local7.
You will need to add a few lines to syslog.conf for it to handle the new logging information.

Again, I recommend you create a copy of syslog.conf before editing it:
cp /etc/syslog.conf /etc/syslog.conf.dontmessthisup

Now to edit the syslog
sudo nano /etc/syslog.conf

At the bottom of the file add the following lines
#Apache Logging
local7.* /var/log/apache2/error.log

You'll need to restart syslog for the change to take effect:
sudo /etc/rc2.d/S10sysklogd restart

You can now test the syslog by writing a message into the log:
logger -p local7.debug "this is working"

we can now check the log
cat /var/log/apache2/error.log

You should now see your test line, something like this:
server root: this is working


Sunday, 21 June 2009

How to write your own capacity management tool

Nothing is better than a free tool, except maybe one you've made yourself.
So here is one I made to collect all the drive space and free space on them into a SQL database.

First of all we need to collect the data from each PC. You could use a manual list of all the servers, but when I was writing this I was feeling lazy, so I used the NET VIEW command to create the list for me, then WMIC commands to query all the PCs for space. Just take the code below and save it to a .bat or .cmd file and you can create the report just by double-clicking.

for /f "delims=\ " %%i in ('net view ^| findstr "\\"') do @echo %%i >> servers.txt
del c:\reports\SRVSPACE.CSV
FOR /F %%A IN (servers.txt) DO (
WMIC /Node:%%A LogicalDisk Where DriveType="3" Get DeviceID,FileSystem,FreeSpace,Size /Format:csv | MORE /E +2 >> c:\reports\SRVSPACE.CSV
)
del servers.txt

You could collect more details or change what is collected; this was just an example. Remember that anything you can collect in CSV format is easy to import into SQL, where Reporting Services can provide charts and reports that can be scheduled to be sent to your inbox if you wish.
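
Before loading the CSV into SQL, you can sanity-check it with a few lines of script. This is a minimal sketch; the column layout is assumed from the WMIC fields above (Node, DeviceID, FileSystem, FreeSpace, Size):

```python
import csv
import io

# Minimal sketch: summarise free space per drive from the CSV produced
# by the batch file above (Node,DeviceID,FileSystem,FreeSpace,Size).
def free_percent(csv_text):
    out = {}
    for row in csv.reader(io.StringIO(csv_text)):
        if len(row) != 5:
            continue  # skip blank or malformed lines
        node, device, fs, free, size = row
        out[node + " " + device] = round(100 * int(free) / int(size), 1)
    return out

sample = "SRV01,C:,NTFS,107374182400,214748364800\n"
print(free_percent(sample))  # {'SRV01 C:': 50.0}
```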

In the first part we created the CSV file that we are importing into SQL, so now we need a database and a table to store it.

The following script creates the database, the table, and the SQL job that will run the import. By creating more tables and columns you can add more reports and create a powerful tool to monitor your network, not just your hard drives.

--create database
CREATE DATABASE Capacity_DB
GO

--create table
USE Capacity_DB
GO
CREATE TABLE Capacity
--create columns
(Node VARCHAR(40),
Drive VARCHAR(40),
Format VARCHAR(40),
Freespace VARCHAR(40),
TotalSpace VARCHAR(40),
Collection_Date VARCHAR(40))


USE Capacity_DB
GO
--create the procedure that will later be used by the SQL job
CREATE PROCEDURE sp_CapacityImport
AS
BEGIN
-- create temp table
CREATE TABLE #cmimport
(Node VARCHAR(40),
Drive VARCHAR(40),
Format VARCHAR(40),
Freespace VARCHAR(40),
TotalSpace VARCHAR(40))

--import from CSV file
BULK INSERT #cmimport
FROM 'c:\reports\SRVSPACE.CSV'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')

--copy data into capacity table
INSERT INTO Capacity (Node, Drive, Format, Freespace, TotalSpace)
SELECT * FROM #cmimport

--clean up
IF OBJECT_ID('tempdb..#cmimport', 'U') IS NOT NULL
DROP TABLE #cmimport

--update missing dates on new imports
UPDATE Capacity
SET Collection_Date = (current_timestamp)
WHERE Collection_Date IS NULL
END
GO

-- Create SQL job to trigger the procedure
USE [msdb]
GO

/****** Object: Job [capacity_import] ******/
DECLARE @ReturnCode INT, @jobId BINARY(16)
SELECT @ReturnCode = 0
BEGIN TRANSACTION
/****** Object: JobCategory [[Uncategorized (Local)]]] Script Date: 06/21/2009 17:20:29 ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END

EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'capacity_import',
@description=N'No description available.',
@category_name=N'[Uncategorized (Local)]',
@owner_login_name=N'SA', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [step1] Script Date: 06/21/2009 17:20:29 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'step1',
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'use capacity_db
exec sp_CapacityImport'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'Daily',
@enabled=1, @freq_type=4, @freq_interval=1, @active_start_time=10000
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO


Now you have the database and the table, plus the import job set to run once a day. If you also schedule the script from part one to run once a day, you'll have a working import.

The final part is to create a report from this data. You can either create a job that builds a summary version and emails it to you, or create a report using SQL Reporting Services.

If you aren't already experienced with SSRS (SQL Server Reporting Service) then
Steve Joubert's posting should help you out.

Exactly the same approach can be used with MySQL or Oracle; personally I would recommend MySQL for this, as it gives you the greatest number of supported platforms. For this example, however, I was using Microsoft SQL 2008.


Thursday, 11 June 2009

Site A to Site B tunnel

Keeping site-to-site traffic simple has never been easy, and keeping it secure while reducing the packets flowing over it is not easy either.

So what types of traffic will be going from site A to B?
Active Directory traffic and Replication
Microsoft SQL

To keep your traffic as simple as you can, I would always recommend using a proxy at each end of the site-to-site VPN. For some traffic, like SQL replication, this might not be such a good idea because of the delay it can add, but I would still try to resolve the issue with the proxy first and only then work around it.

Now, to give you an idea why I would do this, have a look at how many open ports you need with Active Directory:

RPC endpoint mapper 135/tcp, 135/udp
Network basic input/output system (NetBIOS) name service 137/tcp, 137/udp
NetBIOS datagram service 138/udp
NetBIOS session service 139/tcp
RPC dynamic assignment 1024-65535/tcp
Server message block (SMB) over IP (Microsoft-DS) 445/tcp, 445/udp
Lightweight Directory Access Protocol (LDAP)389/tcp
LDAP ping 389/udp
LDAP over SSL 636/tcp
Global catalog LDAP 3268/tcp
Global catalog LDAP over SSL 3269/tcp
Kerberos 88/tcp, 88/udp
Domain Name Service (DNS) 53/tcp, 53/udp
Windows Internet Naming Service (WINS) resolution (if required) 1512/tcp, 1512/udp
WINS replication (if required) 42/tcp, 42/udp
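
To get a feel for how many ACL entries a port-by-port approach implies, here is a sketch that turns the fixed ports above into permit statements. The src/dst placeholders are hypothetical and would be your DC addresses; note the dynamic RPC range (1024-65535/tcp) cannot be listed per port at all:

```python
# The fixed Active Directory ports from the list above, as data.
AD_PORTS = {
    "tcp": [42, 53, 88, 135, 137, 139, 389, 445, 636, 1512, 3268, 3269],
    "udp": [42, 53, 88, 135, 137, 138, 389, 445, 1512],
}

def acl_lines(acl="108", src="<dcA>", dst="<dcB>"):
    # Emit one IOS-style permit line per protocol/port pair.
    return ["access-list %s permit %s host %s host %s eq %d"
            % (acl, proto, src, dst, port)
            for proto, ports in AD_PORTS.items() for port in sorted(ports)]

print(len(acl_lines()))  # 21 permit statements before RPC is even considered
```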

As you can imagine, this is much harder to troubleshoot and keep track of than PPTP on tcp 1723.
This is the reason I would suggest setting up a proxy at each end of the VPN. That's not to say you can't open up the ports, but to keep it secure you'll need to know the source and destination of all packets, and this can be something of an overhead on your configuration.

SQL Server uses 1433 and 1434; this can change depending on the server's settings, but for the most part it is quite easy.

So let's begin.
First of all we should have a VPN between the sites. The one I like best is a VPN tunnel, as this gives you not only the VPN but also tunnel interfaces you can set up with all the ACL rules you want.

I'll use a fairly well-known example, I think, from Richard Deal's Complete Cisco VPN Configuration Guide; I found it a nice bit of night-time reading.

RouterA Configuration:
RTRA(config)# crypto isakmp policy 10
RTRA(config-isakmp)# encryption aes 128
RTRA(config-isakmp)# hash sha
RTRA(config-isakmp)# authentication pre-share
RTRA(config-isakmp)# group 2
RTRA(config-isakmp)# exit
RTRA(config)# crypto isakmp key cisco123 address no-xauth
RTRA(config)# crypto ipsec transform-set RTRtran esp-aes esp-sha-hmac
RTRA(cfg-crypto-trans)# exit
RTRA(config)# crypto ipsec profile VTI
RTRA(ipsec-profile)# set transform-set RTRtran
RTRA(ipsec-profile)# exit
RTRA(config)# interface tunnel 0
RTRA(config-if)# ip address
RTRA(config-if)# tunnel source
RTRA(config-if)# tunnel destination
RTRA(config-if)# tunnel mode ipsec ipv4
RTRA(config-if)# tunnel protection ipsec VTI
RTRA(config)# interface Ethernet0/0
RTRA(config-if)# ip address
RTRA(config-if)# exit
RTRA(config)# interface Ethernet 1/0
RTRA(config-if)# ip address
RTRA(config-if)# exit
RTRA(config)# ip route tunnel0

RouterB Configuration:
RTRB(config)# crypto isakmp policy 10
RTRB(config-isakmp)# encryption aes 128
RTRB(config-isakmp)# hash sha
RTRB(config-isakmp)# authentication pre-share
RTRB(config-isakmp)# group 2
RTRB(config-isakmp)# exit
RTRB(config)# crypto isakmp key cisco123 address no-xauth
RTRB(config)# crypto ipsec transform-set RTRtran esp-aes esp-sha-hmac
RTRB(cfg-crypto-trans)# exit
RTRB(config)# crypto ipsec profile VTI
RTRB(ipsec-profile)# set transform-set RTRtran
RTRB(ipsec-profile)# exit
RTRB(config)# interface tunnel 0
RTRB(config-if)# ip address
RTRB(config-if)# tunnel source
RTRB(config-if)# tunnel destination
RTRB(config-if)# tunnel mode ipsec ipv4
RTRB(config-if)# tunnel protection ipsec VTI
RTRB(config)# interface Ethernet0/0
RTRB(config-if)# ip address
RTRB(config-if)# exit
RTRB(config)# interface Ethernet 1/0
RTRB(config-if)# ip address
RTRB(config-if)# exit
RTRB(config)# ip route tunnel0

So once you have your tunnel up and running, we can set up the access lists on the tunnel interfaces. Remember that you must have permitted the GRE protocol on the WAN interfaces for this to work.

In this next example we are using a PPTP connection between the two Active Directory controllers, so that only PPTP traffic needs to flow over the tunnel; the domain controllers are addressed on the third IP at each site, x.x.x.3.

access-list 108 permit tcp host host eq 1723

This can also be used by file servers with DFS, if Routing and Remote Access is set up on both to use PPTP between them, or via the proxy.
DFS by default uses a number of ports that I would not recommend opening, for the same security reasons as Active Directory.

In this final part I've allowed SQL to travel outside the PPTP connection, between the SQL servers at each site on IP 50 of the range, x.x.x.50.

access-list 108 permit tcp host host eq 1433
access-list 108 permit tcp host host eq 1434

Now, it's important to note that if you are using this in a failover, you're going to need to allow all clients to connect to SQL, and if it's not part of the PPTP then you'll have to set the ACL with a larger allowance for sources.

access-list 108 permit tcp host eq 1433
access-list 108 permit tcp host eq 1434

Another note: if you're going to send the SQL traffic in the tunnel without PPTP because of the extra delay in response times, secure it by using a certificate authority and forcing encryption on the server protocols. However, this will mean you'll need to permit tcp 445 for the SQL traffic as well.

Now your rules are created, you can simply apply them to the tunnel interface:

interface tunnel 0
ip access-group 108 out

You should now be done and secure.

Best practice is also to have access lists on the LAN interface to reduce the traffic on the router, but for this you will need to know more about your network.


Sunday, 7 June 2009

SQL undeletable jobs

It came to my attention a few weeks ago, while we were implementing enterprise automation, that jobs created from T-SQL scripts related to maintenance plans sometimes can't be deleted; this also applies to the rare occasions when a maintenance plan is deleted but its job isn't.

The result is a job that can't be deleted because it is linked to entries in msdb where they are still held.

This happens when the maintenance plan has either been deleted or was never viewable because it had been created by T-SQL; sadly, T-SQL doesn't create the XML needed for the maintenance plan to be seen from SQL Management Studio.

As a result you cannot delete the job without first deleting the links to it in msdb. These can be found in the following three tables:

sysmaintplan_subplans, sysmaintplan_plans, sysmaintplan_log

These three tables have to be cleaned up before you can delete the job, as it is listed in one or more of them. Luckily for us there is a common ID column called PLAN_ID (it is simply ID in sysmaintplan_plans).

So first we need to find the plan_id of our job. If you've been using descriptions on your maintenance jobs this will be easy; if not, you might want to open them up and add descriptions, as otherwise you'll have a lot of plans and no way to identify them.

Querying the three tables will let us see how many of them it exists in; this takes but a few seconds.

use msdb
select * from sysmaintplan_subplans

select * from sysmaintplan_plans

select * from sysmaintplan_log

With the output we were able to identify the plan_id, as it was the only one without a description, and from there could delete it from the tables like so (substitute your own plan_id value):

delete from sysmaintplan_log where plan_id = '<your plan_id>'
delete from sysmaintplan_subplans where plan_id = '<your plan_id>'
delete from sysmaintplan_plans where id = '<your plan_id>'
After this was done we were able to delete the job, as there was no longer any reference to it in msdb.

Wednesday, 3 June 2009

Router performance

Router performance can be affected by a number of things as there are several different aspects involved.

Resource issues, such as the performance of the CPU and RAM
Router IOS configuration changes
Bandwidth management: Quality of Service (QoS)
Layer 1 network issues: Bad circuits or cables
Errors and failure of the router hardware

Bandwidth management: Quality of Service (QoS)

To resolve performance issues or improve performance, you may need to implement some form of bandwidth and/or traffic management. This is commonly called Quality of Service (QoS), but there are many different types of QoS, and picking the right one depends on what you are doing. One thing is for sure: you should try to reduce traffic to only permitted types, as you don't want high utilisation of interfaces.

One quick way to see what the utilization is on your LAN or WAN circuit is to use the show interfaces command and look for the TX/RX Load as well as the five-minute input/output rate. Here are some examples of the show interfaces output that I am referring to:

reliability 255/255, txload 1/255, rxload 1/255

5 minute input rate 0 bits/sec, 0 packets/sec

5 minute output rate 0 bits/sec, 0 packets/sec
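
The txload/rxload figures are fractions of 255, so converting them to a utilisation percentage is simple arithmetic (a small helper of my own, not an IOS command):

```python
# Convert an IOS load figure such as "txload 32/255" from
# show interfaces output to a utilisation percentage.
def load_percent(load_field):
    num, den = load_field.split()[-1].split("/")
    return round(100 * int(num) / int(den), 1)

print(load_percent("txload 1/255"))    # 0.4 - essentially idle
print(load_percent("rxload 128/255"))  # 50.2
```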

I have personally used these to determine what is maxing out a router's circuit and to see in which direction that traffic is flowing, but if you want to monitor for longer periods and get detailed source and destination information, I would suggest you use IP Accounting.
There are some tools, such as Cisco IP Accounting Fetcher and Net-Sense, that will create reports from the information collected by IP Accounting.

Layer 1 network issues: Bad circuits or cables

Many times, the reason that users are complaining about performance is that there is a Layer 1 (Physical) network issue. For example, there is an issue with an Ethernet LAN cable or a T1 WAN connection. Errors that cause slow performance are especially common with WAN connections that span long distances.

The best way to check whether your LAN or WAN connections are causing the slow performance is to use the "show interfaces summary" command to see if you have dropped packets or errors.

Errors and failure of the router hardware

While the show interfaces command might find issues with your connections, those errors could also be caused by your router hardware. For example, you could have a bad HWIC T1 card that is causing slow performance and causing errors to increment in the "show interfaces" output.

If this is a WAN circuit, many times, your carrier will assist you in testing and troubleshooting that circuit.

Personally, I'm a fan of AdventNet ManageEngine OpUtils. It has been a tool I've liked for some time, and it works well as a single interface to many devices, as it's likely you won't have only Cisco hardware.

Saturday, 30 May 2009

Analyze a blue screen

I was having fun the other day with some virtual servers on my laptop when I noticed I was late for a meeting, so I quickly shut down the laptop, and just as it was getting close to finishing, it blue screened... so I let it finish creating the memory dump, and off to the meeting I went.

Later that day, when I had five minutes, I once again booted up the laptop and started to have a look at what caused my blue screen.

To analyze a blue screen there are three simple steps:
1) Download the Debugging Tools for Windows, plus the Symbols Pack if working offline, or set the symbol path to the online symbol server.

2) Open the dump file and run !analyze -v (or kb for shorter output).

3) Switch between processors using ~1 (and so on, for however many processors you have).

So once you've installed the Debugging Tools for Windows, you need the Symbols Pack, or if you're connected you can use the online symbols.
I always like to use the online ones, as I know they are more up to date, and it saves me needing another 200 MB to 600 MB of disk space.

I then opened up MEMORY.DMP, normally located under c:\windows or c:\winnt depending on the version of Windows you have (it may be under another directory if you changed the install location or the memory dump location); the default is %SystemRoot%\MEMORY.DMP.

Loading Dump File [C:\Windows\MEMORY.DMP]
Kernel Summary Dump File: Only kernel address space is available

Symbol search path is:;C:\Windows\Symbols SRV*c:\websymbols*
Executable search path is:
Windows Vista Kernel Version 6000 MP (2 procs) Free x86 compatible
Product: WinNt, suite: TerminalServer SingleUserTS
Built by: 6000.16830.x86fre.vista_gdr.090302-1506
Machine Name:
Kernel base = 0x82000000 PsLoadedModuleList = 0x82111e10
Debug session time: Thu May 28 16:40:54.534 2009 (GMT+2)
System Uptime: 1 days 7:51:45.536
Loading Kernel Symbols
Loading User Symbols

Loading unloaded module list
* *
* Bugcheck Analysis *
* *

Use !analyze -v to get detailed debugging information.

BugCheck A, {0, 1b, 0, 8202915c}

Probably caused by : ndis.sys ( ndis!ndisAcquireMiniportPnPEventLock+60 )

Followup: MachineOwner

As per the prompt I typed !analyze -v,
and now I get what was running at the moment of the blue screen.
In this example the cause, as you can see below, was ndisAcquireMiniportPnPEventLock. Casting my mind back to the point when I was turning off the laptop, I realized I had picked it up from the docking station, so the network card changed just seconds before the blue screen, and this was the cause.

* *
* Bugcheck Analysis *
* *

An attempt was made to access a pageable (or completely invalid) address at an
interrupt request level (IRQL) that is too high. This is usually
caused by drivers using improper addresses.
If a kernel debugger is available get the stack backtrace.
Arg1: 00000000, memory referenced
Arg2: 0000001b, IRQL
Arg3: 00000000, bitfield :
bit 0 : value 0 = read operation, 1 = write operation
bit 3 : value 0 = not an execute operation, 1 = execute operation (only on chips which support this level of status)
Arg4: 8202915c, address which referenced memory

Debugging Details:

READ_ADDRESS: 00000000


8202915c 803902 cmp byte ptr [ecx],2




TRAP_FRAME: a2e2da94 -- (.trap 0xffffffffa2e2da94)
ErrCode = 00000000
eax=00000000 ebx=a654ee30 ecx=00000000 edx=82132300 esi=a654ed78 edi=a654ee00
eip=8202915c esp=a2e2db08 ebp=a2e2db58 iopl=0 nv up ei pl zr na pe nc
cs=0008 ss=0010 ds=0023 es=0023 fs=0030 gs=0000 efl=00010246
8202915c 803902 cmp byte ptr [ecx],2 ds:0023:00000000=??
Resetting default scope

LAST_CONTROL_TRANSFER: from 8202915c to 8208fdc4

a2e2da94 8202915c badb0d00 82132300 82090fe6 nt!KiTrap0E+0x2ac
a2e2db58 81e0ed7b 00000000 00000000 00000000 nt!KeWaitForSingleObject+0x1b5
a2e2db84 81eda107 00b520e8 a2e2dbf8 85b520e8 ndis!ndisAcquireMiniportPnPEventLock+0x60
a2e2dc20 81e2b231 85b520e8 00000000 00000000 ndis!ndisPnPNotifyAllTransports+0xa2
a2e2dca4 81ee7749 85b520e8 00000000 00000000 ndis!ndisDevicePnPEventNotifyFiltersAndAllTransports+0xc5
a2e2dcf8 81ee7b5f 8549bdb8 8549be4c 00000004 ndis!ndisSetPower+0x5ef
a2e2dd20 82050b86 8549be4c 83e4db30 00000000 ndis!ndisPowerDispatch+0x1a3
a2e2dd7c 8222553c 87166db0 a2e26680 00000000 nt!PopIrpWorker+0x40f
a2e2ddc0 820915fe 82050773 87166db0 00000000 nt!PspSystemThreadStartup+0x9d
00000000 00000000 00000000 00000000 00000000 nt!KiThreadStartup+0x16


81e0ed7b 8b4dfc mov ecx,dword ptr [ebp-4]


SYMBOL_NAME: ndis!ndisAcquireMiniportPnPEventLock+60



IMAGE_NAME: ndis.sys


FAILURE_BUCKET_ID: 0xA_ndis!ndisAcquireMiniportPnPEventLock+60

BUCKET_ID: 0xA_ndis!ndisAcquireMiniportPnPEventLock+60

Followup: MachineOwner

Still, I wasn't 100% sure this was the only problem. As I'm lucky enough to have a dual-core laptop, I needed to check the other processor in case it was running something at that time as well, so using the ~1 command I switched to the other core. By the way, processors count from zero up, so the second processor is 1.

I ran !analyze -v again.

1: kd> !analyze -v
* *
* Bugcheck Analysis *
* *

An attempt was made to access a pageable (or completely invalid) address at an
interrupt request level (IRQL) that is too high. This is usually
caused by drivers using improper addresses.
If a kernel debugger is available get the stack backtrace.
Arg1: 00000000, memory referenced
Arg2: 0000001b, IRQL
Arg3: 00000000, bitfield :
bit 0 : value 0 = read operation, 1 = write operation
bit 3 : value 0 = not an execute operation, 1 = execute operation (only on chips which support this level of status)
Arg4: 8202915c, address which referenced memory

Debugging Details:

READ_ADDRESS: 00000000


8202915c 803902 cmp byte ptr [ecx],2




LAST_CONTROL_TRANSFER: from 823a94a3 to 8208191a

88757928 823a94a3 ffd050f0 00000040 8431d648 nt!READ_REGISTER_ULONG+0x6
88757948 823a98e5 88757988 8206f94d 00000000 hal!HalpQueryHpetCount+0x4b
88757950 8206f94d 00000000 820709e5 00000001 hal!HalpHpetQueryPerformanceCounter+0x1d
88757958 820709e5 00000001 0000003c 8875c2a4 nt!EtwpGetPerfCounter+0x8
88757988 8206f5e2 0000003c 887579d0 887579b0 nt!EtwpReserveTraceBuffer+0xce
88757a1c 8206f41b 00040007 00000000 0000002b nt!EtwpTraceMessageVa+0x187
88757a40 8d36456e 00040007 ffffffff 0000002b nt!WmiTraceMessage+0x22
88757a68 8d36555c 00040007 ffffffff 00000020 smb!WPP_SF__guid_+0x20
88757aa8 8d36593f 848594e8 00000002 00000000 smb!SmbBatchedSetBindingInfo+0x152
88757ac0 8c339a32 84425868 84425848 87b137f8 smb!SmbAddressDeletion+0x5d
88757aec 8c339f01 8c33c1a0 84425828 00000000 TDI!TdiNotifyPnpClientList+0x132
88757b10 8c33a2f4 84ac0850 00000000 8ee95338 TDI!TdiExecuteRequest+0x175
88757b48 8c33a547 00425828 0000000c 88757bd4 TDI!TdiHandleSerializedRequest+0x1aa
88757b58 8ee8e11a 84425828 00000010 88757c98 TDI!TdiDeregisterNetAddress+0xf
88757bd4 8ee8e513 85009270 00000000 874b3938 tdx!TdxProcessAddressChangeRoutine+0x22e
88757bf0 829a62a6 00000000 88757c98 88757ca0 tdx!TdxNaAddressChangeEvent+0x7d
88757c58 8eec8460 88757c8c 823a4f00 85b0d908 NETIO!NsiParameterChange+0x73
88757cf8 8eec9860 846438c0 8749e9e4 88757d2c tcpip!IppNotifyAddressChangeAtPassive+0x12c
88757d08 829a14d1 846438c0 820fde7c 873bae58 tcpip!IppCompartmentNotificationWorker+0x11
88757d2c 8218c87c 873bae58 8749e9e4 8749d610 NETIO!NetiopIoWorkItemRoutine+0x2f
88757d44 82078fc0 8749d610 00000000 83e9d828 nt!IopProcessWorkItem+0x2d
88757d7c 8222553c 8749d610 8875c680 00000000 nt!ExpWorkerThread+0xfd
88757dc0 820915fe 82078ec3 00000001 00000000 nt!PspSystemThreadStartup+0x9d
00000000 00000000 00000000 00000000 00000000 nt!KiThreadStartup+0x16


8d36456e 83c420 add esp,20h


SYMBOL_NAME: smb!WPP_SF__guid_+20



IMAGE_NAME: smb.sys


FAILURE_BUCKET_ID: 0xA_smb!WPP_SF__guid_+20

BUCKET_ID: 0xA_smb!WPP_SF__guid_+20

Followup: MachineOwner

From the second processor I could only see SMB (Server Message Block) activity. Feeling confident that the network card was to blame, I left it at that: I had unplugged the network card too fast.

However, the steps are the same for any debug on Windows, and with servers remember to check all the processors on your system.

Now to recap:
1) Download the Debugging Tools for Windows, plus the Symbols Pack if working offline, or set the symbol path to the online symbol server.

2) Open the dump file and run !analyze -v (or kb for shorter output).

3) Switch between processors using ~1 (and so on, for however many processors you have).

Note: symbols starting with nt belong to the system kernel.
NDIS is the Windows library layer for network drivers.
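As an aside, the recap above can also be run unattended using cdb.exe, the console version of the debugger that ships with the Debugging Tools; a hedged sketch (the install path is illustrative, and the symbol path matches the one used above):

```
"C:\Program Files\Debugging Tools for Windows\cdb.exe" -z C:\Windows\MEMORY.DMP -y "srv*c:\websymbols*" -c "!analyze -v; q" > analysis.txt
```

Handy when you want to collect the analysis from several machines without opening the GUI debugger on each one.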

Sadly, knowing what caused your blue screen doesn't always help you, as it might be something like a driver that hasn't been updated yet, so you're still left waiting... however, at least you know what you're waiting for.

I hope after reading this you'll fear the blue screen a little less, and even see it as a challenge rather than something to be scared of.

Saturday, 16 May 2009

Going Green

Most companies don't yet have any form of energy policy covering computers and their operation. A few companies have a basic policy of turning off workstations, but this is just a start, and most employees don't follow it closely.

So here is how to begin improving the energy rating of your network.

Consolidation of servers, coupled with cloud computing, is an effective way to reduce power consumption by reducing the number of physical devices, but this isn't all you can do.

So I'm going to save you some time and give you a few points where you can make changes to reduce the energy consumption of your network.

Consider replacing older hardware with more energy-efficient hardware, such as solid-state drives for laptops. Where possible, replace workstation disks with solid-state drives, or change over to terminal-based sessions: this negates the need for local drives and reduces memory requirements, saving energy, and it also offers better security, as there is no data stored locally if the workstation is stolen.

Disable all but the most basic screen savers, as these heavy graphical applications increase the load on the graphics card and CPU and boost energy consumption.

Allow inactive devices, laptops and workstations to sleep or hibernate by policy.

In the server farm, enabling dynamic processor power management can also save a large amount of energy, as few of us use the CPU at 90% all of the time.

Consolidate switches and disable inactive ports for both power and security reasons.

If all these points are followed, you could lower your total energy consumption by 30 to 40 percent.
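To put a rough number on the sleep/hibernate point above, here is a back-of-the-envelope calculation; the 100 W idle draw and the 16 off-hours per day are illustrative figures, not measurements:

```shell
# Rough annual energy saved per workstation if it sleeps (~3 W)
# instead of idling (~100 W) for 16 hours a day, all year.
idle_watts=100
sleep_watts=3
hours_per_day=16
days_per_year=365
saved_kwh=$(( (idle_watts - sleep_watts) * hours_per_day * days_per_year / 1000 ))
echo "Saved per workstation: ${saved_kwh} kWh/year"
```

Multiply by the number of workstations on your network and the figure adds up quickly.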

Wednesday, 6 May 2009

Cisco logical interfaces

Cisco routers, just like the switches, support VLANs, and you can put many of them onto one physical interface. Here is how it can be done.

Remove the IP address from the physical interface, and bring it up:

no ip address
no shutdown

Create a logical interface to be assigned to one of the VLANs

interface fastethernet 0/0.X

You can change the ‘fastethernet’ to the type you have and the ‘0/0’ with the interface number that you are using.
X represents the logical interface (subinterface) number; since this has no real significance, I tend to use the number of the VLAN so that it's easier to follow.
For example, for the logical interface that you will use for VLAN 5, use 'int fastethernet 0/0.5'. This way, you will easily know which interface refers to which VLAN.

Assign the logical interface to a VLAN number

encapsulation XXX Y, where XXX is the encapsulation type you are using for the VLANs (e.g. isl, or dot1q which is 802.1Q; the most commonly used is dot1q) and Y is the VLAN number that this logical interface will be assigned to.

interface fastethernet0/0.5
encapsulation dot1q 5

Now you have the interface but still no IP.
Assigning an IP address to the logical interface is easy; it's the same as assigning an IP to a physical interface:

ip address

Now repeat the steps for each VLAN that you want. Below I've created three as an example, for VLANs 5, 10 and 15:

interface fastethernet0/0.5
ip address
encapsulation dot1q 5

interface fastethernet0/0.10
ip address
encapsulation dot1q 10

interface fastethernet0/0.15
ip address
encapsulation dot1q 15

Configure static or dynamic routing in the way you need it.
You treat the logical interfaces exactly the same way you treat physical interfaces when doing the routing, so really this isn't that hard.

If you would like some VLANs (i.e., networks) not to participate in the routing, you can either not include them in the routing protocol or not assign a logical interface to them.

Configure access lists in the way you find appropriate to filter the traffic going from one VLAN to another, and apply them to the logical interfaces the same way you apply them to physical interfaces. This might mean the VLANs don't see one another at all, or only one way, depending on what you want.

A common setup is that the management VLAN can see the others, but the others cannot see the management VLAN or one another, except on some needed services.
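A sketch of that common setup, assuming hypothetical addressing (management VLAN 5 on 192.168.5.0/24, user VLAN 10 on 192.168.10.0/24); replies to TCP sessions that the management VLAN opens are allowed back in with the established keyword, everything else toward the management subnet is dropped:

```
access-list 110 permit tcp 192.168.10.0 0.0.0.255 192.168.5.0 0.0.0.255 established
access-list 110 deny   ip  192.168.10.0 0.0.0.255 192.168.5.0 0.0.0.255
access-list 110 permit ip any any

interface fastethernet0/0.10
 ip access-group 110 in
```

Adjust the subnets and the "needed services" lines to match your own network.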

Some things not to leave out or forget about...

If you plan to let routing updates go through the router from one VLAN to another, it is necessary to turn off split horizon. Split-horizon technology forbids an update coming in on one interface from going out the same interface. By the way, it's unlikely you even had it turned on, but you can check to be sure.

no ip split-horizon

Don't forget that without the access lists there would not be much point in doing VLANs and inter-VLAN routing, because everyone would still be able to communicate with everyone else.

Lastly, nearly all switches support trunks on FastEthernet ports, but not on the older 10 Mbps Ethernet ports.

Thursday, 30 April 2009

Installing Open SSH on Ubuntu

By default, when you install OpenSSH you'll be running on port 22, along with some other defaults that are not considered best practice.

If you have taken over an existing SSH server, then you'll need to know the version and the port it's running on.
Running sudo netstat -tulpn will give you a list of running applications with the Internet ports they are using, and ssh -V will give you the version that is running.

If OpenSSH is running, you should see it when you run sudo netstat -tulpn. You can also check the package is installed by typing dpkg --list | grep openssh-server. Equally, you might want to update the package; this is also easy to do using the sudo apt-get install openssh-server command. If there is a new version available you will be prompted to install it, and if the package isn't installed the same command will prompt you to install it.

Now let's get to work... The first thing is that it's not a good idea to be running on well-known port numbers, so you'll need to edit the config file. Some people use the vi editor for this; I like nano better, so if you're used to using vi, just put vi where you see nano. For those of you used to Windows: vi and nano are text editors, much like Notepad or edit from DOS.

Editing the configuration file.
sudo nano /etc/ssh/sshd_config

Within the first few lines you will see Port 22. You should change this to something else; there is no such thing as a good number, but try to make sure you don't use a port you'll need for something else later.

Second, you can change the IP addresses and interfaces OpenSSH will bind to... If, say, you have a multi-IP network with a subnet just for network management, then you'll most likely want it to bind only to the management IP: simply remove the # from in front of ListenAddress and replace the listed address with the IP you want to bind to.

If, on the other hand, you are using one IP for both management and public access, then I'd recommend disabling root access. This can be found under the line marked # Authentication: change PermitRootLogin yes to PermitRootLogin no.

I've never been happy with the standard 768-bit keys. You can change the size, and I often do, to 2048: just change the line ServerKeyBits 768 to ServerKeyBits 2048.

And lastly, it's best to use a banner on the system as well, reminding people that it's against the law to hack or use systems without permission. To do this, remove the # from the Banner line and point it to your banner file; an example is Banner /etc/banner.txt.
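Put together, the edited lines of /etc/ssh/sshd_config might look like this (the port number and listen address here are only examples):

```
Port 2222
ListenAddress 192.168.1.10
PermitRootLogin no
ServerKeyBits 2048
Banner /etc/banner.txt
```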

Now that you've made your changes, exit and save them. You will most likely need to restart OpenSSH before all of the settings take effect, so you might need one or more of the following.

To stop the ssh server: sudo /etc/init.d/ssh stop
To start the ssh server: sudo /etc/init.d/ssh start
To restart the ssh server: sudo /etc/init.d/ssh restart

Wednesday, 29 April 2009

Windows installer cache

I was having fun the other day installing SQL service packs, and I found this little fix that I'd like to share with you all.

When you are missing a file like an MSI or MSP from the Windows Installer cache, you can have problems patching or even removing SQL 2005.
Symptoms: SQL 2005 service pack install fails / SQL 2005 uninstall fails.

For example, we'll pretend I have a SQL 2005 server with SP1 installed and I'm going to install SP2... (sounds simple enough, right?). During the install some of the components fail; in this example I'll say it's my SSIS, but it could be any other component as well: Database Engine, Notification Services, etc.

So after it’s failed I open the hot fix log folder to see what happened in this case
C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\LOG\Hotfix\DTS9_Hotfix_KB921896_sqlrun_dts.msp

Now I start looking for errors; the first one of note is the line
MSI (s) (B8:1C) [13:28:24:254]: Original package ==> C:\WINDOWS\Installer\e893c17.msi

Check the c:\windows\installer folder to see if this file exists. If it doesn't, find sqlrun_dts.msi on the install CD, copy it to the Windows Installer folder, and then rename it as the log shows. This unique name is created at install time, so it will be different on each server; sometimes on the same server it can differ between instances as well.

When you are missing this file you will not be able to install or uninstall Microsoft SQL 2005. Equally, you will need the MSP (Microsoft Patch) file as well; if it's missing, again you'll find the MSP file name in the log:

MSI (s) (B8:1C) [13:28:24:286]: Opening existing patch 'C:\WINDOWS\Installer\e893c8b.msp
Check it exists in the Windows Installer folder; without it the install will fail.

If you are missing the MSP: because I had SP1 before, I need to get the MSP file from SP1, so in this case I run the Service Pack 1 package with the /X switch to extract its files. Once this is done, copy sqlrun_dts.msp from the extract to the Windows Installer directory and rename it as the name shows in the log; again, this name is unique to each server.
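As a sketch, the extract-and-reseed steps look something like this from a command prompt; the service pack file name, extract path, and the cached name e893c8b.msp are illustrative, and yours come from your own log:

```
rem Extract the SP1 package (SQL 2005 service packs support the /X switch)
SQLServer2005SP1-Setup.exe /X:C:\SP1Extract

rem Copy the patch into the Installer cache under the name the log expects
rem (the path to sqlrun_dts.msp inside the extract varies)
copy C:\SP1Extract\sqlrun_dts.msp C:\WINDOWS\Installer\e893c8b.msp
```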

Now that I've corrected all the missing files, I run Service Pack 2 again... and I have a successful install with no errors.
These steps also apply to all other SQL 2005 components, and to SharePoint Services and Microsoft Office as well.

Linux in Enterprise

Why would you use Linux in your enterprise?
Well, apart from the cost saving, there are some really nice things that can be done, but let's start at the beginning and work our way to that.

Most if not all Linux admins are former UNIX admins, and for them it is a little strange to cross over to having a GUI, but frankly I don't know any admin on Windows or UNIX who uses the GUI when he has a command-line option to do the same thing.

Now, those of you who are Windows admins will be asking yourselves why you would use Linux.

1) the cost saving.

2) the access to open source solutions.

3) the security: Linux and UNIX have always been more secure than Windows, so for DMZ and public-facing servers they are stable and secure... Microsoft has been working hard to catch up on this but frankly is still behind.

4) better resource management: unlike Windows, you won't be buying new hardware with each version.

5) if you are looking at virtualization, you want a stable host for your guests; Windows patching and reboots make it hard to host at that lower level. The high-end products can do this well, but if you don't have the budget then you might feel a little left out.

Now, there are many versions of Linux, and there is no such thing as a bad choice on this front, but I'm going to cover just the two I like most: SuSE and Ubuntu.

SuSE, now owned by Novell, has picked up many of the Novell management tools and ships Xen, which makes it perhaps the strongest player for large environments and deployments.

Ubuntu is missing the system management that SuSE has picked up from Novell, but at the same time there are many open-source tools that can be used to overcome this.

So what could you use Linux for? Well, my top list of uses is web servers, DNS servers, email servers, and database servers.

Apache on Linux is just great: it's simple and stable, and very little work needs to be done once it's set up to keep it running, something IIS 7 is still trying to catch up on; even Microsoft has added PHP support to IIS, something Apache has had for years.

Postfix and Sendmail are great mail servers and better for edge deployment. If you have them set up, as I do, on the second and third MX records, then should Exchange or Domino be down in your domain, you still have a mail server under your control that will store the mail until the problems with your normal mail system can be fixed (something too many companies are lacking).

BIND is a DNS server that is just about perfect: it's easy to back up and configure, and can be moved from one server to another quite easily, something that can't be said for the Windows DNS server yet.
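That "easy to back up and move" claim really is just file copying; a minimal sketch that uses a scratch directory in place of the real /etc/bind (the Debian/Ubuntu default location), since the machine you try this on may not run BIND:

```shell
# Simulate backing up BIND's config and zones: on a real server you
# would tar /etc/bind (and /var/cache/bind) instead of this scratch dir.
mkdir -p demo/etc/bind
echo "; zone data" > demo/etc/bind/db.example
tar -czf bind-backup.tar.gz -C demo etc/bind
tar -tzf bind-backup.tar.gz    # list what the archive holds
```

Copy the archive to the new server, unpack it in place, and restart named.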

MySQL and Oracle: Linux supports other database types as well, but these are the most common, and the performance of both can be seen every time you browse the Internet; even Google is powered by this kind of stack. These are also database servers that scale up much better than Microsoft SQL, even SQL 2008; there has been much talk from Microsoft about fixing this, but as yet no light at the end of the tunnel.

Linux might not be the desktop solution for you yet, but I have to admit I have changed all my administration workstations over to Linux, and I use VirtualBox to run applications such as Microsoft Office. (Why would I do this, I hear you say?) Well, apart from the fact that I don't want to spend all day fixing my workstation, it also gives me access to some great open-source tools for problem finding that just don't work on Windows, and as always, you know the system best if you use it every day.

I would recommend all admins use a Linux workstation and run Windows as a virtual PC for those Windows applications you just can't live without... and trust me, there aren't that many once you start using it.

Sadly, this posting is already too large to go into more detail, so I'll just have to cover more in the next posting.

Friday, 10 April 2009

Remote desktop software good or bad?

What is a good remote desktop management software? I heard this question this week, so I'm forced to answer it.

Well, like all good questions, there is no one answer. It's like when someone asks me what a good laptop is: "What do you need it for?" is always the question, and the same applies to remote desktop management software.

Here are some points to consider before you decide on the product to use.
1) Most operating systems already have one or more forms of remote desktop, so are you using this just for legacy desktops, and would it be more cost-effective to upgrade them?

2) How easy is the product to deploy? Can it be scripted or automated to avoid a large amount of administrative overhead? Again, most have this function now.

3) How secure is it? Can you lock it down to admin groups and IPs, as well as just encrypting the traffic? Remember that what is easy for you to get onto desktops also makes it easier for others to get onto them too.

4) Is there any mobile device support?

5) Is it a peer-to-peer connection, or is it relayed through a third-party provider? As these tools become more popular, I expect the attempts to break into them will increase.

Now the scary bit: most if not all of these tools have file transfer. Very handy for your helpdesk, and also very dangerous: with one email or phone call I can set up a connection to any desktop in the world.

As a security test, I set up a connection to a business a few weeks ago that had told me there was no way for anyone to get data out of the building: all USB ports had been disabled, email was scanned, and no FTP was permitted. The administrator seemed quite sure I couldn't get the information out, so after setting up a remote session with one friendly user, I proved that any outside party with just a little help from a user can not only access the system but then copy the data to any remote location, using any open port on the firewall, such as HTTP.

After the demo, the local team changed the firewall to ban all known remote desktop software company sites, but there are more they haven't found yet, and new ones spring up each week.

The best advice I can offer is to permit only a limited number of sites and disable all ActiveX components in browsers to try to prevent this, but frankly it's an open door...

Try not to lose too much sleep over it.

Sunday, 5 April 2009

What message media do you trust?

If you're a large enterprise, then you undoubtedly need a mobile email and contacts solution, and one of the first things I hear when I say this is "BlackBerry".

Is it really a good idea to have BlackBerry in your enterprise?

Well, I'm still undecided, but let's ask some questions first: do you allow business-critical files to be sent to your customers over the Internet unencrypted?

Would you worry that someone could read them ?

Imagine for a moment that you had all of your email in a POP account and that your ISP could read it; would you be happy to live with this?

Because BlackBerry is kind of the same: it's another middleman between your servers and the mobile device you're using. Now, most businesses don't consider securing their mobile devices to be mission critical, but I am of the opinion that this is another security hole.

Not to mention that your administration team then has yet another program to look after; the simpler solution would be to use the extension of the messaging platform you already have.

Such as Microsoft Exchange Direct Push (added in Exchange 2003 SP2) or IBM Lotus iNotes ultralite, depending on your environment.

If, on the other hand, you need more than Microsoft Windows Mobile and Apple iPhones for email, then you could look at Intellisync from Nokia; it again acts as a direct link and allows you to bring the wide range of Nokia phones into your list of enabled devices.

There are other products that offer these functions as well, but remember: make sure the device is talking to the server directly. Going through a provider gives you just another weakness in your network, and this one is outside of your control.

Frankly, I have a lot of trouble believing in most products out there, as they have not passed ISO 27001; some have passed ISO 9001, but this is a very basic check.

So, some simple rules for you messaging administrators out there: use SSL with all devices, no exceptions.

Make sure the product you're using connects directly from device to server, not through some third-party infrastructure.

And finally ask the provider about what security standard the product has passed and if they can't tell you don't use it.

Thursday, 2 April 2009

DHCP automated failover

Today I had one of those better days that I'd like to share with you. There is a nice tool called dhcpcmd; you can get it from Microsoft. It was released with NT4 and later with Windows 2000, and it still works on Vista and 2008. The nice thing about it is that it can do something simple called "GetVersion". That might not seem like a really important thing, but let me explain what it can be used for.

There are three basic ways to set up DHCP. The first is two servers with half the scope on each; if one fails, remove the excluded range and continue to serve the full IP range from one server. This works, but needs manual effort.

The second is to set up a cluster resource for your DHCP. This works quite well, but your DHCP Jet database is not cluster-aware, so sometimes you need to restart your DHCP Server service to get it working after it fails over; again, that's manual effort.

The third option is two servers set up, one with the DHCP Server service stopped until the first server fails; and again, manual effort to start it.

So far you start to see a theme: it's a lot of manual effort, and like all manual effort, it will need doing in the early hours of the morning for sure, because that's how it goes in the IT world when something breaks.

Now, when I came across dhcpcmd, even just its ability to GetVersion was enough. Let me show you with the first option, where we have the scope split across two servers with excluded ranges. I have the following in a script file on one server (it doesn't even have to be one of the nodes), and it is scheduled to run every 5 minutes.

And as you'll see I've put some basic responses in for a failure.

@echo off
dhcpcmd GetVersion
if errorlevel 1 goto Server1_Failed
dhcpcmd GetVersion
if errorlevel 1 goto Server2_Failed

netsh dhcp server \\winserver-2 scope add excluderange
netsh dhcp server \\winserver-1 scope add excluderange
goto All_Done

:Server1_Failed
rem --- alert
net send Administrator "Warning: DHCP server 1 failure failing over to second server"
netsh dhcp server \\winserver-2 scope delete excluderange
goto All_Done

:Server2_Failed
rem --- alert
net send Administrator "Warning: DHCP server 2 failure"
netsh dhcp server \\winserver-1 scope delete excluderange
goto All_Done

:All_Done


Now, the second and third options are almost the same, in that you want to start and/or restart a service, so here is an example:

@echo off
dhcpcmd GetVersion
if errorlevel 1 goto Server1_Failed
goto All_Done

:Server1_Failed
net send Administrator "Warning: DHCP server 1 failure failing over to second server"
psexec \\winserver-1 net stop dhcpserver
psexec \\winserver-2 net start dhcpserver
goto All_Done

:All_Done


You can set up more complex responses to not being able to get something as simple as version information, but you can do this with almost anything that you can get an output from, and I have some nice ones for monitoring servers using just simple scripts.

My hope is that after reading this you will think of another three or more services for which you can do something similar, and then you won't have to fix them in the night; you can wait till morning.

Tuesday, 24 March 2009

System.Web.HttpException: Maximum request length exceeded

This problem occurs because the default value for the maxRequestLength parameter in the <httpRuntime> section of the Machine.config file is 4096 (4 megabytes, as the value is in kilobytes). As a result, files that are larger than this value are not uploaded by default.
This will also affect exports, if you are extracting to Excel say; in fact any attachment type larger than the default will fail.

In the Machine.config file, change the maxRequestLength attribute of the <httpRuntime> configuration section to a larger value. This change affects the whole computer.

The second option, if you don't want to change the value for the whole server: you can change it for one site by modifying the Web.config file; this will override the value of maxRequestLength for that application.

For example, the following entry in Web.config allows files that are less than or equal to 8 megabytes (MB) to be uploaded: <httpRuntime maxRequestLength="8192" />

The max is 1 GB, or 1048576, in .NET 1.0 and 1.1; the limit in .NET 2.0 is 2 GB, or 2097151. I've not had reason to test .NET 3.0, but I'm sure it will be even larger.
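Since maxRequestLength is expressed in kilobytes, the figures above are easy to sanity-check with quick arithmetic:

```shell
# maxRequestLength is in KB: 8192 KB -> 8 MB, 1048576 KB -> 1024 MB (1 GB),
# 2097151 KB -> 2047 MB (just under 2 GB).
echo "8192 KB    = $(( 8192 / 1024 )) MB"
echo "1048576 KB = $(( 1048576 / 1024 )) MB"
echo "2097151 KB = $(( 2097151 / 1024 )) MB"
```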

Just insert the line inside the system.web section of the Web.config file of the site you want to allow the larger files on.

<system.web>

<httpRuntime maxRequestLength="1048576" />

</system.web>

Then restart the site; note that it is only the site, you don't need to restart all of IIS to make this work.

Wednesday, 25 February 2009

Rate Limit and QoS

One of the biggest problems with WAN links is how to manage your traffic: should it be percentage based or rate limited?

Well, percentage based is fine to a point; that is to say, it's fine, but for IP calls and some other real-time services such as video it could be a problem.

Quick example: let's say 50% of your WAN link is reserved for IP calls by your QoS policy... but if more than x number of users make a call, the link will have too much traffic and calls will become fuzzy, to say the least.
So to overcome this we are going to allow just 15 calls on our 1158 kbps reservation, with no more than 100 kbps for each.

The following example shows a T1 (1536 kbps) link configured to permit RSVP reservation of up to 1158 kbps, but no more than 100 kbps for any given flow on interface serial 0/0. Fair queuing is configured with 15 queues to support those reserved flows, should they be required.

interface serial0/0
fair-queue 64 256 15
ip rsvp bandwidth 1158 100
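As a sanity check on those numbers (a rough sketch; real RSVP admission control also accounts for burst sizes), you can work out how many full-rate flows the reservation actually admits:

```python
link_kbps = 1536        # T1 line rate
reservable_kbps = 1158  # ip rsvp bandwidth aggregate limit
per_flow_kbps = 100     # per-flow cap

# RSVP admits new reservations until the aggregate hits the reservable
# limit, so at the full 100 kbps per call only 11 calls fit; the 15 fair
# queues simply leave headroom for flows that reserve less than the cap.
max_full_rate_calls = reservable_kbps // per_flow_kbps
print(max_full_rate_calls)  # 11
```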

Another way this can be done is between a host or a range of hosts, so that the quality remains high on the links between them.

To enable a router to simulate receiving and forwarding Resource Reservation Protocol (RSVP) RESV messages, use the ip rsvp reservation global configuration command. To disable this feature, use the no form of this command.
ip rsvp reservation session-ip-address sender-ip-address {tcp | udp | ip-protocol} session-dport
sender-sport next-hop-ip-address next-hop-interface {ff | se | wf} {rate | load} bandwidth

The following example specifies the use of a Shared Explicit style of reservation and the controlled load service, with token buckets of 100 or 150 kbps and 60 or 65 kbps maximum queue depth:
ip rsvp reservation UDP 20 30 Et1 se load 100 60
ip rsvp reservation TCP 20 30 Et1 se load 150 65

The following example specifies the use of a Wild Card Filter style of reservation and the guaranteed bit rate service, with token buckets of 300 or 350 kbps and 60 or 65 kbps maximum queue depth:
ip rsvp reservation UDP 20 0 Et1 wf rate 300 60
ip rsvp reservation UDP 20 0 Et1 wf rate 350 65

Note that the Wild Card Filter does not admit the specification of the sender; it accepts all senders. This action is denoted by setting the source address and port to zero. If, in any filter style, the destination port is specified to be zero, RSVP does not permit the source port to be anything else; it understands that such protocols do not use ports or that the specification applies to all ports. This can be a problem if other services are on the same range, so it is best to define access lists to block all unwanted traffic.

Last but not least.
To reserve a strict priority queue for a set of Real-Time Transport Protocol (RTP) packet flows belonging to a range of User Datagram Protocol (UDP) destination ports, use the ip rtp priority interface configuration command. To disable the strict priority queue, use the no form of this command.
ip rtp priority starting-rtp-port-number port-number-range bandwidth

The following example first defines a CBWFQ configuration and then reserves a strict priority queue with the following values: a starting RTP port number of 16384, a range of 16383 UDP ports, and a maximum bandwidth of 40 kbps:

! The following commands define a class map:
class-map class1
match access-group 101

! The following commands create and attach a policy map:
policy-map policy1
class class1
bandwidth 3000
queue-limit 30
random-detect precedence 0 32 256 100

interface Serial1
service-policy output policy1
! The following command reserves a strict priority queue:
ip rtp priority 16384 16383 40

Defining what is best for you, or even whether to use all of these rate limits and QoS features, is something that will be up to you... but remember not to use too many of them, as otherwise you will end up with lines that are never fully used because all the policies prevent it.

Good rule of thumb: keep the policies simple.

Friday, 20 February 2009

DMZ for Legacy applications?

The majority of enterprises have a habit of forgetting that one or more legacy applications are running out-of-date software, perhaps even an unsupported version, and for design reasons can't be upgraded.

By design I mean either the program was poorly written and won't run, even in compatibility mode, on later versions because the developer didn't use coding standards.

Trust me when I say that 90% of the time developers seem blissfully unaware there is even such a thing as standard practice in development.

Or the software house no longer exists; the reasons are numerous, but the outcome is the same: you have a hole in your security.

Out-of-date versions of software make you vulnerable to code exploits, DoS and other well-known attacks on these applications.

Remember that you have just as many security risks inside your company as outside of it.

So to better secure your application and avoid security breaches or DoS attacks, place the older application into a DMZ in the same way you would with a web server or email server, so that you can control the traffic going to and from it. (Don't put it in the same DMZ as your public-facing servers such as web and email, or the network administrator from hell will eat your soul!!!) OK, he won't, but I needed to make it clear to you. Put them in a separate DMZ for internal use only.

Remember that limiting the ports and destinations of the traffic will make it far more secure; it is also good practice to limit the way traffic flows on your LAN. Where possible, place all application servers into a DMZ, or at least limit the traffic flowing between them on the switches or VLANs.
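As an illustration, here is a minimal Cisco-style access list sketch for the interface facing such a DMZ; the host addresses, interface name, and access-list number are illustrative assumptions (TCP 1433 is the default SQL Server port):

```
! Allow only the application server (10.1.1.10) to reach the legacy
! SQL server (10.2.2.20) on the default SQL Server TCP port
access-list 110 permit tcp host 10.1.1.10 host 10.2.2.20 eq 1433
! Drop and log everything else heading into the legacy DMZ
access-list 110 deny ip any any log

interface FastEthernet0/1
 ip access-group 110 in
```

The same idea applies on any firewall: permit the one known application path, then deny and log the rest.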

OK, enough theory, now for a real-life example... A company has an old in-house application that runs reports for managers on projected sales. Nothing special in that, but it's running off a Visual Basic application with a SQL backend. So far nothing special; however, the SQL server is version 7.0, which is no longer supported or patched, the developer did some coding in the database that stops the reports from working in later versions, and the developer no longer works for the company, so we need to keep it for now.

Using well-known exploits I went from having a user account with limited access to SA access in just under 20 minutes (thanks to Google); no deep SQL knowledge needed, just some light reading. Just type the version and the word "exploit" and you're halfway done.

After some meetings it was shown that if only the Visual Basic application could access SQL, on its well-known ports, we could prevent 98% of the attacks; still not bulletproof, but much better than before.

So remember, DMZs are not just for public-facing services, as half the security risks are already working on your network.