Monday, 12 December 2016

Windows Cluster 2016 Without Active Directory

Running a cluster without a domain is not something that is common; however, if you need to put a cluster into a DMZ, say, and you don't want to expose any domain credentials or cause a denial of service through constant wrong passwords against a domain account, then this could be the solution you are looking for.

Before we start there are a few things you should do: create an account that the cluster services can use (it should be a member of the local Administrators group on each node) and import the PowerShell modules we will be using.

First the user creation; you will need to run this on each server:
net user /add ClusterAdmin Super!SecurePa22Word
net localgroup administrators ClusterAdmin /add

Naming servers is something that you should consider; in my case that was CL for cluster and NODE1-2, as names like WIN-LNF6MLM119B are hard to remember later on.

Renaming the server via PowerShell and restarting is easy.
Rename-Computer -NewName "CL-NODE1"  -Restart
Rename-Computer -NewName "CL-NODE2"  -Restart

If you wanted to do this remotely then use something like this.
Rename-Computer -ComputerName "WIN-LNF6MLM119B" -NewName "CL-NODE1" -LocalCredential (Get-Credential) -Restart

Just remember you will need to run Enable-PSRemoting on the target servers first.

Next, we have to change the local policy on the servers (run this on each node) to allow a non-Active Directory cluster to be created.
New-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System -Name LocalAccountTokenFilterPolicy -Value 1 -PropertyType DWORD

Now that is done we can proceed with creating the cluster. I recommend that you check the shared disks and anything else you plan to use in your cluster before starting.

new-cluster -Name <clustername> -Node <servername> -AdministrativeAccessPoint DNS

new-cluster -Name MySQLCluster -Node CL-NODE1,CL-NODE2 -AdministrativeAccessPoint DNS

After running this command you will get one of three outcomes: a failure, in which case I recommend you recheck your steps; a message telling you the cluster is done; or a cluster created with some warnings, which usually point to missing best practices and are worth fixing.



Tuesday, 6 December 2016

SQL Server on Ubuntu Server First Look

I have to say SQL Server has always been one of Microsoft's better products, and seeing it make the transition to Linux can only be a good thing.

However, at the same time, I am a little disappointed that the current build has such large limitations, even for a public preview.

There is no working SQL Server Management Studio for Linux, so all commands come either from a Windows PC over the network or from SQLCMD; the SQL Agent service doesn't yet work, and even Always On availability groups are not yet available.

That said, you can see that the framework is there and even Active Directory authentication is almost working; however, you will get an error if you try to add a user.

The install process is simple enough: just add the repository and then make sure your SA password is complex enough.

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list | sudo tee /etc/apt/sources.list.d/mssql-server.list


sudo apt-get update && sudo apt-get install -y mssql-server

sudo /opt/mssql/bin/sqlservr-setup


Once you have your server up you'll need some tools, unless you plan to manage it over the network using SQL Server Management Studio.

Installing BCP and SQLCMD is also a quick and painless activity.

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | sudo tee /etc/apt/sources.list.d/msprod.list

sudo apt-get update && sudo apt-get install mssql-tools
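With the tools installed, a quick smoke test proves the server is listening. This is a minimal sketch, assuming the default tools path of /opt/mssql-tools/bin and the SA password you set during setup.

# add the tools to the path (default install location for the mssql-tools package)
export PATH="$PATH:/opt/mssql-tools/bin"
# connect locally and print the version to confirm SQL Server is up
sqlcmd -S localhost -U SA -P '<YourStrong!Passw0rd>' -Q "SELECT @@VERSION;"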

There is also a docker package available and if you are using docker in your environment already this is a perfect way to go, or even if you are just testing for development uses.

sudo docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' -p 1433:1433 -d microsoft/mssql-server-linux

Sadly I was left with the feeling that it will be many more months before a fully working version is released, and that is a shame given the hype Microsoft put into this.

Monday, 5 December 2016

Using PSEXEC and Batch to remotely patch servers

I have written before about the power of using PSEXEC to patch servers and run a query against them, however nothing is more powerful than using PSEXEC in combination with batch scripts.

So today I'm going to show you how to patch all your servers. First, I'm going to assume you have a list of servers and that you are going to run PSEXEC against them.

for /f %i in (c:\list1.txt) do psexec -c -d \\%i c:\batchfile.bat

Seems easy so far, right? Now that batch file has to find out whether the server is x86 or x64. It can do this very quickly and easily using if "%PROCESSOR_ARCHITECTURE%"=="AMD64", which is effectively a true or false test, because either you have x64 or you don't, and since we don't have 128-bit servers yet we won't have to worry about a third option just yet.

So this is what our batch file might look like using else to specify the response if the server is not x64

net use X: \\server\share\
@echo off 
setlocal 
set PATHTOFIXES=x:\update 

if "%PROCESSOR_ARCHITECTURE%"=="AMD64" (GOTO X64) else (GOTO X86)

:X64
%PATHTOFIXES%\SQLServer2014SP2-KB3171021-x64-ENU.exe /quiet /norestart 
GOTO END

:X86
%PATHTOFIXES%\SQLServer2014SP2-KB3171021-x86-ENU.exe /quiet /norestart 
GOTO END

:END
net use x: /d


Now you might be thinking this is great, I can now use this to patch my 32-bit and 64-bit Windows servers, and you'd be right, you can. However, since more than 90 percent of us have to work with more than one version of Windows, you'll quickly realise this solves only half the problem: how do you patch Windows 2008 and 2012 in the same file?

Well, not to worry, we have a way around that as well: all we have to do is find the OS version and then, knowing what that version is, jump to the correct section of the batch file.
For version numbers, you can get this from the Microsoft pages https://msdn.microsoft.com/en-us/library/ms724832(VS.85).aspx

So here is a simple example. I know that version 6.3 is Windows 2012 R2, so I can check for it and get either a yes or no: if yes, jump to that section; if not, continue.

ver | findstr /i "6\.3\." > nul
if %ERRORLEVEL% EQU 0 (
GOTO W2K12R2 )

This works great but can lead to really long scripts when dealing with more than two or three OS versions; as you can imagine, that's a lot of typing just to get the version. So a quicker approach is to create one check that covers all the versions.

echo off
for /f "tokens=4-5 delims=. " %%i in ('ver') do set VERSION=%%i.%%j
if "%version%" == "10.0" echo Windows Server 2016
if "%version%" == "6.3" echo Windows Server 2012 R2
if "%version%" == "6.2" echo Windows Server 2012
if "%version%" == "6.1" echo Windows Server 2008 R2
if "%version%" == "6.0" echo Windows Server 2008

So now that we can determine the Windows version, we can combine it with our true or false x64 check and create a simple patching script. I won't lie to you, this will still be a big batch file, however you can make it easy to read by filling the empty space with comments.

Remember that for every OS you will have two versions, x86 and x64, so the more versions of Windows you support the bigger the batch file will be.

net use X: \\server\share\
@echo off 
setlocal 
set PATHTOFIXES=x:\update 

for /f "tokens=4-5 delims=. " %%i in ('ver') do set VERSION=%%i.%%j
if "%version%" == "10.0" GOTO W2K16
if "%version%" == "6.3" GOTO W2K12R2
if "%version%" == "6.2" GOTO W2K12
if "%version%" == "6.1" GOTO W2K8R2
if "%version%" == "6.0" GOTO W2K8

REM WINDOWS 2008 PATCHING GOES HERE
:W2K8
if "%PROCESSOR_ARCHITECTURE%"=="AMD64" (GOTO W2K8X64) else (GOTO W2K8X86)

REM Patches for Windows 2008 x64
:W2K8X64
%PATHTOFIXES%\Windows2008-KB######-x64-LLL.exe /quiet /norestart 
GOTO END

REM Patches for Windows 2008 x86
:W2K8X86
%PATHTOFIXES%\Windows2008-KB######-x86-LLL.exe /quiet /norestart 
GOTO END


REM WINDOWS 2008R2 PATCHING GOES HERE
:W2K8R2
if "%PROCESSOR_ARCHITECTURE%"=="AMD64" (GOTO W2K8R2X64) else (GOTO W2K8R2X86)

REM Patches for Windows 2008R2 x64
:W2K8R2X64
%PATHTOFIXES%\Windows2008R2-KB######-x64-LLL.exe /quiet /norestart 
GOTO END

REM Patches for Windows 2008R2 x86
:W2K8R2X86
%PATHTOFIXES%\Windows2008R2-KB######-x86-LLL.exe /quiet /norestart 
GOTO END


REM WINDOWS 2012 PATCHING GOES HERE
:W2K12
if "%PROCESSOR_ARCHITECTURE%"=="AMD64" (GOTO W2K12X64) else (GOTO W2K12X86)

REM Patches for Windows 2012 x64
:W2K12X64
%PATHTOFIXES%\Windows2012-KB######-x64-LLL.exe /quiet /norestart 
GOTO END

REM Patches for Windows 2012 x86
:W2K12X86
%PATHTOFIXES%\Windows2012-KB######-x86-LLL.exe /quiet /norestart 
GOTO END


REM WINDOWS 2012R2 PATCHING GOES HERE
:W2K12R2
if "%PROCESSOR_ARCHITECTURE%"=="AMD64" (GOTO W2K12R2X64) else (GOTO W2K12R2X86)

REM Patches for Windows 2012R2 x64
:W2K12R2X64
%PATHTOFIXES%\Windows2012R2-KB######-x64-LLL.exe /quiet /norestart 
GOTO END

REM Patches for Windows 2012R2 x86
:W2K12R2X86
%PATHTOFIXES%\Windows2012R2-KB######-x86-LLL.exe /quiet /norestart 
GOTO END


REM WINDOWS 2016 PATCHING GOES HERE
:W2K16
echo OS = Windows 2016, I don't have patches for that
if "%PROCESSOR_ARCHITECTURE%"=="AMD64" (GOTO W2K16X64) else (GOTO W2K16X86)

:W2K16X64
echo I wish I had patches for that :)
GOTO END

:W2K16X86
echo Still don't have patches for that
GOTO END

:END
net use x: /d

As you can see that can be quite large, and that's without putting in each and every patch that you'd need to add to the list; however, if you are doing this once a month, adding the patch names to this batch file is going to be a lot easier than creating lists by OS and then lists by processor architecture.

So how could we improve on this? Well, how about a dynamic list of patches that gets created every time the batch file runs? This works as long as you maintain a folder structure for the patches, for example \\server\share\windows2012\x64 with all the x64 patches under that folder.

We could use a dir /b *.exe command to grab all the exe files and run them like so.
chdir /d x:\windows2012\x64
dir /b *.exe >c:\install.txt
for /f %%i in (c:\install.txt) do %%i /quiet /norestart 
del c:\install.txt

The result is four lines per option; however, you would no longer need to change the batch file, only add the downloaded patches to the folders on the share.

Saturday, 3 December 2016

Joining Ubuntu Server to Active Directory

Adding an Ubuntu server to your Active Directory is perhaps one of the most interesting things these days, as the partnership with Microsoft grows.

So I'm going to walk you through the steps needed to get you connected.

Step One Basic Connectivity

First of all, we need to make sure you can resolve the domain.

sudo nano /etc/network/interfaces

In my lab domain, the server addresses are 172.16.1.6 and 172.16.1.16, so I changed the interface config to include the line below.

dns-nameservers 172.16.1.6 172.16.1.16

After saving the file I pinged the FQDN of a server in the domain to see if the name was resolved.
ping dom.lab.local

Since the name was resolved I moved onto the next step.

If you are using a DHCP-assigned address you might want to check out my other post, Ubuntu DNS Host Resolution Issue, as I address one common issue there before you continue here.

Step Two Installing Packages

Next, we are going to need four packages

  • NTP - Network Time Protocol
  • SSSD - System Security Services Daemon
  • Samba - Open Source/Free Software suite that provides seamless file and print services to SMB/CIFS clients
  • krb5 - Kerberos is a network authentication protocol. It is designed to provide strong authentication for client/server applications
If you find this too much reading you can always follow along with the YouTube videos.



From the terminal window run the following command to install all four packages.
sudo apt install krb5-user samba sssd ntp

Step Three Configuring Kerberos

Now we are going to need to configure them. First up is Kerberos: you were most likely asked for the name of the domain during the package install, however you will need to add a few more lines.

sudo nano /etc/krb5.conf

Below is an example of what is in my configuration.
[libdefaults]

default_realm = LAB.LOCAL
ticket_lifetime = 24h
renew_lifetime = 7d

[realms]
LAB.LOCAL = {
    kdc = DOM.LAB.LOCAL
    kdc = DOM2.LAB.LOCAL
    admin_server = DOM.LAB.LOCAL
}

Step Four Configuring NTP

Configure time synchronisation so that computer accounts and Kerberos packets are not rejected due to a time mismatch between servers.
sudo nano /etc/ntp.conf

Simply add one new line with your time server
server dom.lab.local
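Once NTP has been restarted later in step eight, you can confirm the server is actually syncing against the domain controller; a quick check, assuming the classic ntpd from the ntp package above.

# list the peers ntpd is talking to; dom.lab.local should appear in the output
ntpq -p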

Step Five Configuring Samba

Next up, we are going to edit Samba.
sudo nano /etc/samba/smb.conf

Nothing too hard here; however, you will need to add a few more lines to the global config.

[global]

workgroup = LAB
client signing = yes
client use spnego = yes
kerberos method = secrets and keytab
realm = LAB.LOCAL
security = ads

Step Six Configuring SSSD

Configuring SSSD is both the hardest and the easiest part at the same time: as there is no template file provided by the package, you'll have to create a new one.

sudo nano /etc/sssd/sssd.conf

You'll need at least the config below; please note I use the simple access provider because of nested groups.

[sssd]
services = nss, pam
config_file_version = 2
domains = LAB.LOCAL

[domain/LAB.LOCAL]
id_provider = ad
access_provider = simple

# Note that this config only allows 2 users and 2 groups to gain access.
# simple_allow_users = joker@lab.local,chrissy@lab.local
# simple_allow_groups = linux-admin,linux-users

# Use this if users are being logged in at /.
# This example specifies /home/DOMAIN-FQDN/user as $HOME.  Use with pam_mkhomedir.so
override_homedir = /home/%d/%u


After closing and saving the file do not forget to set the permissions.
sudo chown root:root /etc/sssd/sssd.conf
sudo chmod 600 /etc/sssd/sssd.conf

Step Seven Updating Localhost File

Last but not least, don't forget to update the local hosts file with the IP and FQDN that the server is about to have.

sudo nano /etc/hosts

127.0.0.1 myserver.lab.local myserver
172.16.1.8 myserver.lab.local myserver


Step Eight Restarting Services and Joining Domain.

Finally, you'll need to restart the services for the changes to take effect and tell Linux what account to use for joining the domain.


sudo systemctl restart ntp.service
sudo systemctl restart smbd.service nmbd.service

Select the user to join the domain with
sudo kinit Administrator

Join the domain
sudo net ads join -k
sudo systemctl start sssd.service

If you get errors during the join check your config and rerun the net ads join command.
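Once the join succeeds it is worth confirming that SSSD can actually resolve domain accounts before handing the box over; a minimal check, where joker@lab.local is the hypothetical user from the sssd.conf comments above.

# show the Kerberos ticket obtained with kinit
klist
# resolve a domain user through SSSD
id joker@lab.local
getent passwd joker@lab.local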

I've also taken the time to upload example files to save you a bit of time in the attached link.


Tuesday, 29 November 2016

Common SQL Agent Misunderstanding

Given this is a common misunderstanding, let's look at why.
First, when a SQL job runs as a user that is not part of the sysadmin role, you will see "Executed as user:" followed by that user, in our case LAB\Joker; if on the other hand it was run by a sysadmin, you would see the name of the account the SQL engine is running under.

So let's stay with our LAB\Joker for the moment: what does it mean when we say "executed as"?
What it means is that the T-SQL is run with the same permissions that user has inside SQL Server, so if he has database owner rights then the job step will have database owner rights.

However once the job steps outside of the SQL server, for example grabbing a file from a network share, you are no longer inside SQL.


So if you are outside of SQL when getting the file from the network share, what account is SQL using to get the file? The answer is simple: it's using the SQL Agent service account.

Now if your SQL Agent service is running as Local System, then unless the computer account (server$) has been granted access to the share it will not have access. You could give Everyone permission, but I do not recommend using Everyone on any share, ever.

Since computer accounts are hidden in Windows, you will need to add the $ sign when granting access or the account will not show up.

The better option is to create a domain service account for the SQL Agent to run under and grant that account access to the share.

Our example job had the following dummy data:
10,Field2,Field3,Field4
15,Field2,Field3,Field4
46,Field2,Field3,Field4
58,Field2,Field3,Field4

And used the following T-SQL statements, which you can adapt to do your own import:
--Create table
CREATE TABLE myimporttable (
   Col1 smallint,
   Col2 nvarchar(50),
   Col3 nvarchar(50),
   Col4 nvarchar(50)
   );
GO
--Begin import
BULK INSERT myimporttable 
  FROM '\\dom\Share1\import.txt' --Over Network
  WITH (
  FIELDTERMINATOR = ',',  --CSV field delimiter
   ROWTERMINATOR = '\n',   --Use to shift the control to next row
   TABLOCK
)
GO
--Check content
SELECT * FROM myimporttable ;
GO
--Clean Up
Drop table myimporttable 

I recommend experimenting with the permissions a few times until it's clear in your head, as this can be a complex subject because the permissions are layered.

If this was all too much theory you can also follow along on youtube where I have a practical example.

Wednesday, 26 October 2016

Best WiFi setup

OK, I don't want to be mean, but I can't take the crazy WiFi posts any more, so here are some facts to debunk some of those myths floating around.

Number 1 a better router equals a better signal.

WiFi is like radio: a better signal needs to happen at both ends, so getting a better WiFi router or access point helps, but only so much.

Example: let's say you have 2 antennas on your old router and 4 on the new one; the new one will have a stronger signal and be able to pick up packets from the WiFi clients better. And now the question: how much better? Since I happen to have two 802.11ac rated routers I did some testing, and the answer is that at close range, i.e. five metres, there is no difference, with a steady 148Mbps out of my 150Mbps internet connection; however, once we put walls and range into the mix it starts to show.
In our benchmark we placed them 13m apart with two non-supporting walls in between; the 2-antenna router now gives a stable 11Mbps vs 48Mbps for the 4-antenna model.

So we did the same test with a 6-antenna router; that must be better, right? Well, we didn't see much of it:
the test showed about 51.5Mbps, and this was almost across the board, so not much better for another 150 Euro router.

As we can see, range and walls take a heavy toll on performance. However, this was tested with a single-antenna client; what happens when I change the number of antennas on the client?

Woohoo: a 4-antenna client on a 4-antenna router got 138Mbps, so as we can see it helps if both ends can be upgraded. However, if you have smartphones or tablets that can't have more antennas fitted this won't help you. So keep in mind that if you have an old router with one or two antennas, replacing it will help; but if you already have 4 or more you won't see as much benefit as upgrading the client, and the client upgrade was less than 50 Euro.

So the magic word here is MIMO: 2x2, or better 4x4 if you can get it on the clients.

Number 2 new routers will help with SSID congestion.

No, they won't, but what they do have is a lot of smart features, like finding the least congested channel. That might not help though: remember it's checking for empty channels where the router is placed, not where your PC is, so unless they are close together this sometimes won't help.



Example: let's say close to your router there are only three SSIDs, on channels 2 and 6, so it picks 11 as being free. However, close to your PC there are five SSIDs, mostly on channels 9 and 11; this means the signal your router gets from the PC will be very weak, and being on channel 2 or 6 would in fact be better.

In these cases it's good to look at where you plan to have the PC and download something like a WiFi analyser to your smartphone to check what is in use there vs where you plan to place the router.

In my case, all the channels are in use but the least used one at both ends was channel 10 with only one other SSID on it.

Number 3 Christmas trees and tinfoil.

This one is the best: placing your Christmas tree, with lots of tinsel, between your PC and the WiFi router will impact performance; but honestly, if you are placing a tree next to your WiFi you are either living in a place so small it won't impact you, or you are just being silly.
In the real world we tested this, and the effect at close range was less than 1Mbps with the tree next to the router, and about 10Mbps at range. We also tested with Christmas lights and saw no change, until we tried the scary lights held together with patched wires; then we did see some problems, but frankly you shouldn't use those as they are a fire hazard and you might want to think about replacing your lights.

Now the tinfoil suggestion: some people have suggested putting tinfoil behind your WiFi router so that interfering signals from your neighbours are reduced. It sounds like a good idea at first but very quickly becomes impractical: without knowing exactly where the neighbours' WiFi is placed and how many overlapping signals you have, this would only work if you covered all the walls with tinfoil, and by the way, if you do that it drives your WiFi crazy as it gets echoes from itself. So I really don't recommend this.


If your WiFi channel scan looks like this, go 5GHz; trust me, there will be fewer SSIDs on it, as the fight over 2.4GHz is already lost.



Number 4 the right settings.

Having a dual-band WiFi router does help, as there are more channels than on a single band, but most of us aren't getting the best out of it because of the way drivers work.

Example: let's say you set your dual-band WiFi router to 802.11ac only; you will get some great performance. However, most of us have that one device that still can't do anything better than 802.11n, so we set the router to mixed mode and the result is that the better devices lose performance.
On some phones and tablets this can be changed to use 5GHz only, and this does help.
For Windows, you can change this in Device Manager by exploring the network adapter's properties, where you can change Wireless mode to IEEE 802.11a/n/ac.


To give you a real-world example of how much this helps: when left on auto the connection was sometimes 80Mbps, other times 40Mbps; when manually set to IEEE 802.11a/n/ac the speed was 138Mbps, more than 170% better for a one-minute tweak.

Another tip is to turn off "allow the computer to turn off this device to save power", as that can cause some strange disconnects from time to time.

These tweaks can also be done in Linux; however, you'll have to check the manual or vendor support for how best to do this, as there are too many variants to list here.

Number 5 Things to avoid.

There is a long list of things you should not put between your WiFi router and WiFi clients if you want the best signal; here is a short version.

Kitchens and Bathrooms, since unshielded electrical appliances like hair dryers, food mixers and microwaves can cause disruption.



Other fun objects like fish tanks, as the signal will have to travel through water; and walls, not just because they are solid objects but because many walls have metal frames that reflect the signal.

Fans and other electric motors, this includes toy cars, drones and baby monitors.

Number 6 Things to do.

Let's make the assumption you live in a big house or flat and have more than one access point: don't use extenders, as they will cut the bandwidth by 50%, so a cable between them is best.

If you have a good smartphone, basically anything since 2014, it should support dual band, so use a WiFi app to see if the 5GHz channels are in use; if not, you might be the first and get good speed on that band, as 2.4GHz is saturated these days. If you have no choice, then at least pick the channel with the least number of SSIDs running on it.

Upgrade your devices with better WiFi, as it's not always the router's fault.

Check from time to time whether new SSIDs have appeared and what channels they are running on; sometimes you might need to change channel as one neighbour moves out and another arrives using a different channel.

If you can place the router close to the most common area you are going to use then do it, as the best way to fight signal loss is staying close to the source.

Summary 
Hopefully this has dispelled some of the faster-WiFi myths and stopped some of you buying a new router for no reason.

If, on the other hand, you are not running 802.11n or 802.11ac dual band then you should be; these are not expensive, as most new routers with this are under 100 Euro and some even under 50 Euro.

Friday, 21 October 2016

Batch and Bash script

Writing simple batch and bash scripts is by far the most time-saving thing any administrator can do.
These can also be useful to anyone that needs to repeat a task with only simple value changes.

For example, let's say I want to start a batch of Tomcat servers at once on my desktop for testing.

Bash
for i in {80..90}; do docker run -d --name tomcat$i -p 80$i:8080 tomcat8.5/example; done

Batch
for /l %i in (80,1,90) do docker run -d --name tomcat%i -p 80%i:8080 tomcat8.5/example

I would now have Tomcat containers named tomcat80 to tomcat90 with ports mapped 8080 to 8090, easy to remember and easy to create. This could equally be LXD or Hyper-V guests; it really comes down to whether there is a batch or bash command for them.

It's also possible I could have used more than one for loop on the command line and created the port numbers and names separately, but for the moment this is just an example. What I would like to focus on is how Windows and Linux have some small differences between them, even though the code above does exactly the same thing.

So let's do another example, this time I'm going to ping a subnet and see what answers.

Bash
for i in {1..254}; do ping -c 1 192.168.0.$i |grep ttl; done >range.txt

Batch
for /L %i in (1,1,254) do ping -n 1 192.168.0.%i |find /i "ttl" >>range.txt

So now we have done two examples working with numbers, how about working with files? This could be a list of servers or just a list of names.

Bash
for i in $(cat serverlist1.txt); do echo $i; done

Batch
for /f %i in (serverlist1.txt) do echo %i

What you might have noticed already is that while Bash doesn't care about the type, and uses braces and dollar signs to indicate the kind of data, Batch uses a slash switch for files (/f) versus number ranges (/L), and no switch for plain lists.


Bash
for i in Mo Tu We Th Fr; do echo day$i; done

Batch
for %i in (Mo Tu We Th Fr) do echo day%i

Now that you have the basics of the for command you can create loops over files, ranges and text with ease. How you apply that to your work can be anything from running commands on more than one server, to patching, to checking uptime, memory usage or available space; the list goes on.

Setting up ssh on more than one server
ssh-keygen && for host in $(cat hosts.txt); do ssh-copy-id $host; done

For running commands on more than one server (although I recommend Ansible for larger farms):
for host in $(cat hosts.txt); do ssh "$host" "$command" >"output.$host"; done
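If you do take the Ansible route for farms, the same loop collapses into a single ad-hoc command; a sketch, assuming hosts.txt contains one hostname per line so it can double as a simple inventory.

# run a command on every host listed in hosts.txt over ssh
ansible all -i hosts.txt -m shell -a "uptime"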

Patching windows servers
for /f %i in (c:\serverlist1.txt) do psexec -c -d \\%i Win2008R2SP1.exe /quiet /norestart /overwriteoem


Installing features

for /f %i in (c:\serverlist1.txt) do psexec -c \\%i ServerManagerCmd.exe -install Application-Server Hyper-V WAS -restart

Just to be clear: yes, I am using psexec in my examples, and I have said before that you should migrate to PowerShell; however, this is a Batch vs Bash comparison, not Bash vs PowerShell.

Tuesday, 11 October 2016

Why Firewall a Server

I'm going to address something that came up in a conversation I had the other day with some people who run data centres: they put firewalls between customers and whatever is exposed to the internet, however not against traffic from one customer server to another.

When questioned on the subject, the response I got was that nothing can get in or out so it's secure, and it reduces administrative overhead.



Well, no, and I had to point out two things: first, if an infected client passes something to the server it's not secure any more, and examples of such zero-day exploits are many; second, if one server is compromised it allows hackers, viruses and malware to spread faster when nearby servers are not protected. Finally, the administrative overhead? That's a two-minute update to the provisioning script, people, nothing more.

In short, there is no real reason not to have a local firewall; both Linux and Windows offer their own versions that can easily be customised to allow monitoring and remote access from trusted hosts during provisioning.
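As an illustration of how little work this is on the Linux side, here is a minimal UFW sketch that could be dropped into a provisioning script; the 10.0.0.0/24 management subnet is a made-up example.

# deny everything inbound by default, allow outbound
sudo ufw default deny incoming
sudo ufw default allow outgoing
# allow SSH and SNMP monitoring only from the trusted management subnet (hypothetical range)
sudo ufw allow from 10.0.0.0/24 to any port 22 proto tcp
sudo ufw allow from 10.0.0.0/24 to any port 161 proto udp
# turn the firewall on
sudo ufw enable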

Now some of you are thinking, well, this is what happens with small cloud providers, right?
Well, you'd be wrong: the people I am talking about are blue-chip IT firms and household names. You see, one of the reasons for this is that in these larger companies the people doing the provisioning automation do not have any security training or any process in place for hardening, leaving almost all of this down to the end customers, who most of the time don't have the skills.

Do I think this is the right approach? Well, no. Frankly, this might be OK in an IaaS model, but for PaaS it is a detail that is overlooked and leaves their customers exposed.

What is still more worrying is that many of them do not have a patching process either, leaving you more exposed over time and, in my mind, with an even greater need for a firewall on the server.

Now I know the video below is only about AWS, but please keep in mind this could happen to any cloud; it also covers more details on styles of attack.
https://www.youtube.com/watch?v=dPSEjegiUCE&list=WL&index=13

Monday, 3 October 2016

Change Management

When is a change not a change? Yes, this is a trick question. You see, the sun coming up in the morning and going down in the evening is a change of state, however it's one we expect. This is sometimes overlooked by change managers when they want a change request for what is expected behaviour, like a move of a virtual server from one host in the farm to another.

Let us look at types of change first of all and compare them to real world examples.

Retrospective
Emergency
Normal

The retrospective change should be the least used and really should only be raised when a change was made to resolve a critical incident; a good example of this might be a software patch or firewall change to block an attack that was taking place.

The emergency change is used more than it should be, in my opinion, and it should always be reviewed afterwards to understand why it took place. Some examples could be a zero-day exploit that you want to patch quickly; another could be a last-minute request from a business unit, like a code change for a sales promotion.
In most if not all cases you have to ask whether this could have been foreseen and better planned.

The normal change is the one where you had a chance to tick all the boxes and should be most comfortable approving.

The questions that should have been asked in both normal and emergency change are as follows.
  • Has the change been tested?
  • Does the change affect other things, aka disaster recovery and service overview documents?
  • Does it affect other applications connected with it?
  • Have stakeholders from those affected applications taken part?
Next on the list is rollback plan.
  • Is there a rollback plan, if not then why not.
  • When is the rollback to take place aka the defined set of things that have or have not happened to trigger the rollback, and the expected time for the rollback to take place?
All of the information above should be available before the change advisory board review the change.
On the other hand, when a server fails and as a result the service fails over to the secondary, this is not a change... this is expected behaviour. I have seen change managers ask for a change request for failovers and, to be honest, I've laughed at them.
When the primary is restored and we want to fail back over to it, yes, that is a change... If something breaks and we have to change a setting to fix it, that is a retrospective change, because you already have an incident.
However, we have to be clear: the incident in this case needs to be service-interrupting, otherwise it can wait for an emergency change. Perhaps a good example of this would be an overnight job that is running and will not finish in time, so you need to change the import parameter to make it run faster; this is not yet a service impact, however it does require some urgency, hence emergency. If, on the other hand, the job was going to finish this time but the trend is that in a few weeks it will not complete in time as the jobs are getting slower, that would be a normal change.
Hopefully this helps you understand when change management is used up front and when it can simply be informed afterwards during impacting incidents.

Post Change Checks Automation

Checking changes: ever had one of those changes that should have been simple, and then afterwards something wasn't working and took hours to track down?

Like when one server in the farm is not running because someone forgot to start it, or a network subnet was wrong on a firewall change so some things work and others don't?

Well, if you have, don't worry, you're not alone. Now, if you've invested some time in good monitoring you might be able to check for those things quickly, or perhaps you could just add a post-change check to the change process.

Today I'm going to show the benefits of scripting some post-change checks,
like: is the network connection OK, is the application running, and so on.

Part one: is the network OK?
There are normally a few things to check at the network level.

1) DNS - this might not be important to you if the application server uses only IP resolution, however, I like to use names as it makes network changes more dynamic.

2) Packets/Ports - if you have ping, that will tell you basic network connectivity; however, if there is a firewall you need to know whether the port the application communicates on is open.

3) Are common services available, can you reach NTP, DNS, LDAP/Active Directory and Databases.

This can be done with a batch script for the most part; however, one limit on Windows is that plain batch can't check whether ports are open. You can still check most things; for example, here is one to check that your local internet connection is working.

@echo off
cls
ping -n 1 192.168.0.1 | find "TTL"
if not errorlevel 1 set error=ok
if errorlevel 1 set error=fail
nslookup www.google.com | find "Addresses"
if not errorlevel 1 set error1=ok
if errorlevel 1 set error1=fail
ping -n 1 8.8.8.8 | find "TTL"
if not errorlevel 1 set error2=ok
if errorlevel 1 set error2=fail
nslookup www.google.com 8.8.8.8 | find "Addresses"
if not errorlevel 1 set error3=ok
if errorlevel 1 set error3=fail
cls
echo Result: Local connection %error%
echo Result: Local DNS %error1%
echo Result: Remote connection %error2%
echo Result: Remote DNS %error3%

One of the most common issues is your ISP's DNS servers failing, so you can see that not only do I check DNS on the router, but I then check the result against Google's open DNS server, proving both local and remote connectivity.

If you have Windows 8 or Windows 2012 or higher you can use the PowerShell cmdlet Test-NetConnection; this can check whether ports are open, unlike batch, without needing third-party tools.

#check connection to dns
Test-NetConnection -ComputerName 8.8.8.8 -Port 53 -InformationLevel Detailed | Select-Object RemotePort, TcpTestSucceeded
#http lookup
Test-NetConnection -ComputerName www.yahoo.com -CommonTCPPort HTTP -InformationLevel Detailed | Select-Object RemotePort, TcpTestSucceeded
#dns lookup
Resolve-DnsName home.com -Server 8.8.8.8 -Type A | Select-Object IPAddress
#check running service
Get-Service -Name "vss" -ComputerName "localhost"
#check service account user is not locked out, and connection to active directory
Get-ADUser IIS_ServiceAccount -Properties * | Select-Object LockedOut

On Linux this can be done much more easily using netcat or Nmap to get the results; these can also be used on Windows, however the Nmap installer needs a reboot, so I'd recommend netcat if you have the choice.
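For example, a couple of quick netcat checks covering the same DNS and web targets as the PowerShell version above; a minimal sketch.

# is TCP port 53 reachable on Google's DNS server?
nc -zv 8.8.8.8 53
# is HTTP reachable on the web server?
nc -zv www.yahoo.com 80
# UDP probe for NTP (UDP checks are less reliable than TCP, so treat a pass loosely)
nc -zvu pool.ntp.org 123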

Now obviously the list of checks needs to be customised to your needs; however, with these simple examples hopefully you will be able to create some quick post-change checks.

For some of you this will include message queues and job statuses; however, for the most part you'll already have some monitoring to help you with that.

Sunday, 11 September 2016

Capacity Management

Is it possible to have a second network on your existing hardware for free?
Most likely, yes.

Looking within most networks you will find servers that are under-utilised, such as file and print servers, and these make ideal candidates for building a second network.

If you haven't started the migration to virtual servers yet, it is probably because you are still running older operating systems, as nearly all new ones have this as an option.
Where is the benefit in this? Well, the real benefit is that you can put all that hardware to good use; here is an example.

In the past you will have had servers loaded at maybe 10% CPU and others at 60% or more; this unequal load was directly related to the task they were performing, such as running applications and/or file services.

By having multiple servers running on one physical host we can make full use of the resources and even share the load of other more heavily utilised servers by adding another virtual server to the farm.

OK, before we get into the many ways you can improve your network by doing this, let's look at some prerequisites. Number one, you need enough physical resources to support this, and as the base operating system is recommended to be a clean system to avoid stability issues, that means you will have to factor in one more operating system (no matter what they say, it's still going to use some I/O and RAM).

The largest overheads in virtualisation are I/O and RAM; with most quad-core systems the CPU can handle the load very well, but after using the system for a while the issue of I/O is often the first to come up.

Disk operations are always an issue with large amounts of data, and bottlenecks in this area are more common these days than before. Remember that virtual servers are using virtual devices, not physical ones, so it's always best to get your counters from the base OS that is really interacting with the hardware.

Some basic figures on disk I/O related to SANs were provided by HP a few years ago: they stated that you should have one 4Gbps HBA connection for every 250GB of data on a highly used system. When virtualising your servers you are loading many operating systems and accessing data for all of them, so depending on the nature of the system this can mean very high I/O; database servers and file servers use the most I/O, while authentication servers use the most CPU.

So here is an example load for a virtual setup. You have two physical servers, and the guests might be the same OS with different applications, but in general you'll end up with six operating systems: two base systems and four virtual servers. Let's say you have SQL servers and Active Directory servers.

Place domain controller A on server A with SQL server A,
then place SQL server B and domain controller B on server B; this way you have mixed the I/O and CPU loads between the servers.

Now, VMware's resource manager does this very nicely, and while Hyper-V can also do it, it's frankly not as polished as VMware, so you might have to balance the resources manually.

But we are not finished: what about resource spikes? What happens if the SQL server gets deadlocks and the CPU load goes up? Will our domain controller freeze? In short, yes, which is why the most important thing is to set up some resource management on the base OS so that this can't happen.

I always like to run the virtual servers for a good weekend before setting the resource management limits so that I have some idea where to place them.
60 percent CPU for the domain controller and 30 percent for the SQL server was one of the best splits I have had so far; the domain controller had some 90,000 users so you can imagine it was quite busy, while the SQL server was not used for ETL jobs or OLAP, it was more I/O-bound, running reports and ad hoc queries with very low CPU load.

The layering of systems allows for better usage; however, it's good to remember that failures happen too, so if you have 5 servers layered like this you need a 6th in case something happens to one of the 5.
You can scale up like this as well, so for 10 servers keep 2 for failover. Also try to keep servers away from one another if you can: if you have 12 servers, try not to have more than two in the same blade enclosure, so you avoid a single enclosure killing the farm.

Keep in mind you should also write down the growth plan: when will you need to add servers, and when would an application become too big for a virtual server? This can become more interesting than it sounds, as you have to go and ask the application owner whether that application can be farmed out over more servers or whether a migration to larger hardware will be needed.

Ideally you should have those answers to hand and check at least once a year that they haven't changed; with good planning you will know what the upper limit is and where the expected growth will come from.

SSH config for users

Let me ask you a question: are you still using plain ssh admin@host.com, or more likely port forwarding like this: ssh -L 5900:localhost:5900 admin@host.com?

If so, STOP right now, just stop; there is a much better way, and it's called the ssh config file.
user's configuration file (~/.ssh/config)
system-wide configuration file (/etc/ssh/ssh_config)

For the moment, since I'm not the sharing kind when it comes to connection details, we'll focus on the user configuration file stored in ~/.ssh/config. If you've never used it before then the file doesn't exist; there are a number of ways to create it, using vi or nano,
but we'll use touch ~/.ssh/config

Now you should have an empty file, so let's give some examples of what can be put there.

Host server1
     HostName server1.company.com
     User minecraft
     Port 4242
     IdentityFile ~/.ssh/server1.key


Host server2
     HostName 192.168.1.100
     User root
     IdentityFile ~/.ssh/server2.key

What this now lets you do is use just ssh server1 or ssh server2. Not only is that shorter and easier to remember, it also means you don't need to remember switches like -p and -i, and there is no reason to stop there: more complex configurations can be used to match domains and IP ranges.


Now here is a larger example using a few kinds of setups, with ports, port forwarding and timeout settings.

### default for all ##
Host *
     ForwardAgent no
     ForwardX11 no
     ForwardX11Trusted yes
     User minecraft
     Port 22
     Protocol 2
     ServerAliveInterval 60
     ServerAliveCountMax 30

## override as per host ##
Host server1
     HostName server1.company.com
     User minecraft
     Port 4242
     IdentityFile /nfs/shared/users/nixcraft/keys/server1/id_rsa

## Home nas server ##
Host nas
     HostName 192.168.1.100
     User root
     IdentityFile ~/.ssh/nas01.key

## Login AWS Cloud ##
Host aws.apache
     HostName 10.20.3.4
     User wwwdata
     IdentityFile ~/.ssh/aws.apache.key


## Forward all local port 3128 traffic to port 3128 on the remote vps1.company.com server ##
## $ ssh -f -N  proxy ##
Host proxy
    HostName vps1.company.com
    User anonnymus
    IdentityFile ~/.ssh/vps1.key
    LocalForward 3128 127.0.0.1:3128

You can also use ranges on the hosts so that anything matching that range can be used
Host *.company.com
Host 192.168.0.?

You can also pre-configure connections via a server that has network access like a proxy or gateway to the network.

Host *.company.com
  User admin
  Port 4444
  IdentityFile ~/.ssh/aws.apache.key
  ForwardAgent yes
  ProxyCommand ssh accessable.company.com nc %h %p

Now any server I ssh to whose name ends in company.com will be forwarded first through the server called accessable.company.com and then on to the target server, meaning you don't have to type very long ssh commands to reach it.
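With that block in place, reaching a host behind the gateway is a single short command; web01 here is just a made-up internal name for illustration.

# this is transparently proxied through accessable.company.com by the ProxyCommand line
ssh web01.company.com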

Note that TCP forwarding has to be enabled on the intermediate ssh server for the last example to work.

Friday, 9 September 2016

Installing Software Remotely Using Powershell

If you have PowerShell on your network it might be time to make use of it. Now, I am not saying replace psexec if you have been using it, as that is still a great tool, and on large networks with servers both with and without PowerShell it's sometimes the best way.

Nevertheless, if you're one of the lucky few with a newer network, this is how you can use PowerShell to do that nice remote installer trick I once showed with psexec.

First you will need to make sure both the server and the client have PowerShell installed
and PSRemoting enabled.

Start PowerShell as administrator and run the command below.

Enable-PSRemoting 

Now you might want to do this quickly using psexec

psexec \\[computer name] -u [admin account name] -p [admin account password] -h -d powershell.exe "enable-psremoting -force"

You can also replace \\[computer name] with an IP address, or even @C:\[path]\list.txt to automatically enable PSRemoting on a big list of computers.
With PSRemoting enabled you can run scripts remotely on any computer that you wish.

For example:

Invoke-Command -Command {\\servershare\Softwares\Setup.exe /parameter:01 /parameter 2 } -computerName (Get-Content "c:\webservers.txt")

Or you can go one step further and create the list of computers dynamically using Active Directory.

Invoke-Command -command { dir } -computerName ( Get-ADComputer -filter * -searchBase "ou=Web,dc=company,dc=pri" | Select-Object -expand Name )

This is where the power over psexec starts as you can use objects in active directory to determine where to install the software.

Virtualizing Active Directory

A long time ago I wrote that it was a good idea to have virtual Active Directory servers, as this is a very quick way to recover in a disaster recovery scenario.

What I forgot to mention at the time are the things you need to have in place for this to work.
For example, Microsoft doesn't like supporting you unless the platform is Hyper-V; VMware will support you,
but sorry to say you have limited or no support on other platforms.

Also, to avoid dirty writes, disable the write cache; this is something I hope you have already done for your databases and application servers that are virtualised.
This should be less of an issue if you are using a SAN.

Last but not least, do test the restores: create at least one isolated VLAN to restore Active Directory into so that you are sure the current backup works, and do this at least once a month, as finding out you have a corrupt Active Directory and can't restore it is a nightmare you don't ever want to have.
That said, the benefits of being able to do restores quickly, and being able to script even the disaster recovery tests, make it worth it.

As an example, a disaster recovery test for Active Directory used to take 6 hours: restoring it and then being able to bring up applications.
With scripting and backups on the SAN (a virtual tape library), it is now done with only a few commands in under 40 minutes.

Dynamic code generation bad for apps?

While using HSQLDB, H2 and others lets code be created dynamically without your Java developer needing to think hard about the database, it does come with some overheads.

First, the query is now created by a program that doesn't understand the intended outcome, and because of this it will sometimes create repeated queries for the same set of information. This doesn't show up in small unit tests; however, it can become a large performance bottleneck when dealing with many thousands of transactions.

Second, these programs do not apply any performance best practices, and it can be hard to link the Java query to the actual SQL statement that is executed on the database layer.

Such things that are overlooked are:
  • Network traffic caused by long query statements.
  • Slow queries and the N+1 query issue.

With this in mind, does such a development style help? Well, yes, if you're writing a program that is small and has a very small database.
If, on the other hand, you're expecting it to grow to a larger size and continue to do so over the years to come, you might have just shot yourself in the foot, as debugging performance issues will become a nightmare. This is not to say that the caching functions are not useful; however, do not rely on them to write good queries for you.

Wednesday, 7 September 2016

ssh keys how big should they be

I was asked recently how big an ssh key should be; the answer is simple: as big as you can support.
The reason I say as big as you can support is not only that the larger the key, the harder it is to break, but also that you will most likely be limited by some device on your network that doesn't support keys larger than 4096 bits.

For example, I try to run 8192-bit keys everywhere I can, and one of the places I've found that I can't is phones; however, this is more of an app issue than the phone itself.

Some of you will ask why not a larger key like 16384-bit, while others will ask why go larger than 2048.

Well, the answer is simple on both counts: 2048-bit is now the standard for most systems, meaning it will be the first one that people try to break; this doesn't make it any less secure, however it does mean more people are trying to break it.

As for 16384-bit, apart from the overhead on the connection, depending on the speed of the computer on each side it can make the connection unreliable and painful to use.

So I split the difference and went with an 8192-bit key; so far I can say the connections are stable and I have a good feeling about the security. However, I still have another 2048-bit key that I use for online sites that don't yet support 8192-bit, and I'm sad to say that doesn't look like it will change for at least another two years.
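Generating and rolling out a larger key is only a couple of commands; a minimal sketch, assuming RSA and a separate key file so your existing 2048-bit key stays untouched.

# generate an 8192-bit RSA key pair (this takes noticeably longer than 2048-bit)
ssh-keygen -t rsa -b 8192 -f ~/.ssh/id_rsa_8192 -C "8192-bit key"
# copy the public key to a server and test the login with the new key
ssh-copy-id -i ~/.ssh/id_rsa_8192.pub admin@host.com
ssh -i ~/.ssh/id_rsa_8192 admin@host.com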

Slow Database

As we move to larger datasets we have improved processors, disk I/O and CPU, however, we are still held back mostly by our own code.

We know that full table scans are bad and we do our best to avoid them most of the time, and if you have a good DBA he or she will find any that come up over time. But there is another thing that can slow down data access when a large number of queries overlap.

Locks
Now, locks are a perfectly natural thing in a database and exist for data consistency, which is a good thing. Nevertheless, while this makes sense when changing data, it is often unnecessary overhead for retrieval of information, such as the selects that normally make up the bulk of database queries.

A SELECT doesn't hold an exclusive lock on pages; rather it sets a shared lock (S) on the pages it reads, and other transactions can't modify the data while shared locks exist (though they can read the data by placing another shared lock). So it is expected that your SELECT blocks any updates.

So let's paint an example: if you have a website with 100,000 users, some 10,000 might be online at the same time, so that is a lot of overlapping selects as many of the users view or query the same information. Now, there are many caching and other smart things you can do at the middleware layer to reduce this; however, at some point the query will still reach the database, and at that point you don't want it waiting because an update or insert is running and the rows, table or pages are locked.

So the pain comes when an update and a select are running at the same time: the select uses a shared lock, but the update and insert use an exclusive lock.

Now, one option would be to use NOLOCK on all of your select statements; problem solved, right?
Well, not really, as now you have reads of incomplete data, and it's also a bad habit to get into: once you start using hints like that you might put one on an insert, update or delete statement and then you have a lovely corrupt database.
Also, if you have a good DBA he or she will have flagged any statement with NOLOCK, in the same way they will flag SELECT *, as they are not best practice.

So what to do? Well, the answer is simple... drum roll, please... ISOLATION LEVELS. These give you more options for reading data, either committed or uncommitted depending on your need; MySQL, Oracle and Microsoft SQL Server all support isolation levels, so you can control what needs locks and what doesn't.
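As a small illustration using sqlcmd from the shell, here are the two places isolation can be set on Microsoft SQL Server: per database and per session. MyAppDB is a hypothetical database name and the password placeholder is the same one used earlier; treat this as a sketch and test it before touching production.

# database level: make READ COMMITTED use row versioning so readers stop blocking behind writers
sqlcmd -S localhost -U SA -P '<YourStrong!Passw0rd>' -Q "ALTER DATABASE MyAppDB SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;"
# session level: explicitly choose an isolation level for one connection's reporting query
sqlcmd -S localhost -U SA -P '<YourStrong!Passw0rd>' -Q "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; SELECT COUNT(*) FROM myimporttable;"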

This is not only best practice but the way you should use your database.

Big Data Means Less Workers ?

Now, we might all have heard about big data, but what does it mean to some 90% of people?
Well, based on what I hear when I ask, people think of DNA research, understanding space, or the NSA and CIA thanks to their spying on the public.

Amazingly, almost no one has heard of or knows about IBM Watson, and that's a bit of a shame, as there is no better example of what big data can do than Doctor Watson.

Now Watson, because it has access to all the information, both medical and drug-related, has a higher success rate than your GP for diagnosis and treatment. And think about it: it doesn't need sleep and learns all the current medical practice in real time, so it is never working with out-of-date information.

So I think this is a great medical tool, but let's explore it for a moment in other fields: architecture, electronics design, tax and government workflows, IT support, clothing and much more.
All of these require many things to be known that change over time, and because of the complexity they are hard to master; but with big data intelligence this is no longer an issue, as all of that information can be on hand at once.

So let's give an example: the TDP is too high for your next model of laptop because an engineer overlooked something, and this results in a product recall; with big data this could have been prevented.

With clothing, the colour runs because of the kind of dye that is used and you need to set the care label correctly; this could be done by big data without manual checking.

IT support and government follow workflows that get you to the end result; this could be done by big data without the need for humans.

Some of you are by now starting to think: so what jobs are left, as many of the things currently done by humans would no longer be needed? Well, it's not all bad news. First of all, computers still can't create something new, so we need people to think of new things.
Second, computers still can't interact with people very well, so for that face-to-face time we need people.

What big data is good for is understanding complex things better and avoiding human mistakes that happen when things are overlooked.

Sunday, 3 January 2016

Bottlenecks, Bottlenecks everywhere.

Recently I decided to get a new PC. Not feeling the need to hand over 3000 Euro for a top-of-the-line model, I decided to build my own, and since it's been some 10 years since I last built one, some research was needed. Along the way there were a lot of things that seemed to make no sense to me, so I am sharing this because I believe you (the public) are being ripped off when it comes to hardware, and here are some of the reasons.

CPU's that you don't need, RAM that you can't use and graphics you can't see.
If you continue to read you'll find out why.

Picking a CPU.
Pick one that matches your needs.
For example, a dual core or even quad core is more than enough for 90% of people; an 8-core might just be a waste of money unless you are a high-end video editor, compile programs, or do something else that can really make use of more than 4 cores.

This is why Apple and others use lower-end 4-core CPUs.
Personally, I have a heavier desktop workload, so I went with a higher-end 8-core.

Also remember that whatever CPU you pick determines your options for the motherboard, so if you need PCIe 3.0 or M.2 this might limit the choices later.


DDR3 or DDR4 memory?
Well, DDR4 is better, but can you really make use of it? Here are some things to know.

First, if you are rocking an AMD CPU such as the FX 8 series, it is limited to 1866MHz DDR3, so anything faster isn't going to change the fact that the CPU won't go any faster. The same affects the A10 series, which is limited to 2100MHz DDR3, so you do get some benefit with the A10.

Now I know all you Intel fanboys are saying that's why Intel is better; well, not really. Intel 6th-gen i7, i5 and i3 support DDR3 and DDR4, but DDR3 at 1866MHz, the same as the AMD FX 8, and DDR4 at 2133MHz, almost the same speed as the AMD A10, so there is no big gap here.

Second, the benefits of DDR4 aren't really used by home PCs, as you need either larger memory configs like 128GB or 8 memory sockets on the motherboard to make use of it.

So what did we learn? DDR3 is cheaper and there is no loss in using it. Also, if they have put 2800MHz or 2400MHz RAM in, you are frankly wasting money for no benefit, because even with overclocking it's not worth it.

GPUs
Much like CPUs, GPUs require a great deal of research to find the perfect one for you.

However, if you are not a gamer and are just looking for some good video playback, then the on-board graphics might be enough for you with a nice CPU, and this is an area where the AMD A10 is better, with its 4 CPU cores and 8 embedded GPU cores; it simply walks all over Intel at 866MHz compared with Intel's low 350MHz.

If, on the other hand, you want a dedicated GPU on a graphics card, then Intel is the way to go for gaming. However, and this is a big however, if you are looking at video editing or office applications the AMD FX 8 will outperform the Intel at the same tier.

Now the next big gaming myth: two cards are better than one... well, yes and equally no.
You see, before we get into the specs of cards we need to look at humans.

The human eye can see up to 1000 FPS and perhaps above; however, the average for us non-super-humans is closer to about 70 FPS, and when you keep in mind that average TV and video playback is just 60fps, you have to ask: do we notice? Well, yes, but few of us notice anything above 75 FPS, and if you don't believe me, look at your home games consoles, which only show 60 FPS, and the same goes for all your TV programmes.

So a single card that can reach 75 FPS is not only going to be easy on the electric bill but also on your monitor, as most of them are still 60-75Hz unless you go for a high-end model.
If, on the other hand, you want to go that route, here are some things to think about: you need to use DisplayPort or HDMI 2.0 connectors if you want those higher frame rates at Full HD or 4K, as HDMI 1.4 will get you only 60Hz at Full HD and 30Hz at 4K, and most of the monitors on the market won't show more than 60Hz at 4K anyway, so basically you are forced to game at Full HD if you want more than 60Hz.

So if you want to double your monitor cost and your graphics card cost, sure, go ahead and use a dual graphics card setup, if you will even see the frames; you might be one of the few lucky people who can see high FPS and has money to burn.

So hopefully you now understand why I would go for an AMD R9 NANO over an Nvidia GTX Titan X.
Reason one: the AMD R9 NANO is about 10 frames lower on average than its Nvidia counterpart, but remember that this is 90 FPS instead of 100 FPS on a monitor where you can't see more than 75 FPS at 75Hz.

Reason two: the AMD R9 NANO pulls 75 watts less power than the Nvidia GTX Titan X and therefore not only gives a lighter electric bill but has less heat to dissipate, so you can have a quieter PC as opposed to the near wind tunnel I've heard from some gaming rigs.

Now, if I just wanted good gaming and lower power I could have gone for a GeForce GTX 750 Ti, as this uses even less power and is more than good enough for gaming at 60fps; however, it didn't fit my two-screen setup.


SSD vs HDD
Don't worry, I'm not going to compare the two, as we all know SSD is faster; however, do you need SSD for storage?
Wanting more than 1TB of storage, I elected to split the money and go for a fast M.2 SSD for the OS and normal HDDs for longer-term storage, giving me the option to use RAID on the storage for the data I really want to keep.

Now, a normal SSD is about 550MB/s; however, I went for a smaller number of gigabytes and got the Samsung 950 Pro, giving me a massive 2200MB/s read speed that boosts my OS performance.
Intel and Samsung are the leaders in the new, faster M.2 and PCIe SSD interfaces; however, they are not the only ones.

With this selection I got more speed than almost any high-end workstation with a standard SSD, as most haven't moved over to this interface yet. However, there are some downsides as well: most motherboards still don't have an M.2 port, and you need to be sure it's an x4 PCIe socket and not x2, or you will miss the benefit of these drives.

There is one more thing to check before buying your fast SSD, and that is that it's bootable, as some of the early models were not.

Final Build
Here is the component list I used to build a powerful workstation without breaking the 1500 Euro mark, making this half the cost of some rigs with the same or better performance in some areas.

ASROCK Fatal1ty 970 Performance
Samsung 950 Pro 256GB
AMD FX-8320E
Kingston DDR3 1866MHz 16 gigabytes KIT CL9 HyperX Savage Series Dual Channel
SilentiumPC Gladius M45W
ARCTIC Fusion 550RF Retail
SilentiumPC Fera 2 HE1224
MSI R9 NANO 4G
Seagate Desktop SSHD 2000 GB x3 using RAID 5 total 4TB storage usable.

Final Spec
CPU: 3.2GHz (turbo to 4GHz), 8 cores
Memory: 16GB RAM at 1866MHz; SSD: 256GB at 2200MB/s; HDD: 4TB RAID 5 at 190MB/s
Graphics card: R9 NANO, 4GB, 4096-bit memory bus

Now remember, I spent this much to have a high-speed PC; however, if you use the on-board graphics and don't need some of the storage, you can still have the speed at around 600 Euro with ultra-fast performance.