Sunday, 30 July 2017

Why Ryzen 3 is OK for a new PC but not for an upgrade

If you have an AMD FX chip or an older Intel i3, say the 7100 or lower, it makes no sense to upgrade to a Ryzen 3, as you are going to have to replace your motherboard to get the AM4 socket; and for those of you with an i3, you might want to check whether your existing motherboard could take an i7 7700 as the better upgrade option.



With that said, if you are rocking an older AMD FX, it also doesn't make sense for you to upgrade to a Ryzen 3, for the following reasons.

First off, if you are going to go for a new motherboard and CPU, then you're already spending quite a bit, and the performance gain over the FX chips isn't that big for the Ryzen 3. The likely result is that you would spend the money and be left feeling you didn't get anything for it, with only a 10-20% gain over your existing setup. So my advice would be to just save up the extra and get a Ryzen 1600 or 1700, since the performance boost you'll get from a Ryzen 1200 or 1300X is not going to leave you saying "yes, that was totally worth it".

The second reason is that the Ryzen 3 isn't that much faster than the i3, and we all know both will be hard pressed to keep up with growing CPU demands in the coming years. So unless you want to be doing this again in 24 months, you are best off just getting a Ryzen 7 and calling yourself safe till 2021.


On the other hand, if you are looking for a new PC, you want good performance at low cost, and you are not into video editing or gaming (perhaps most of your workload is just internet browsing), then a Ryzen 3 is more than good enough and will likely cost less than an Intel i3.

Now, this is not an Intel or AMD sponsored article, and there are more than enough benchmarks to back me up on this. However, my two cents: you shouldn't buy less than a Ryzen 5 or Intel i5 in today's PCs and laptops, as there are just too many CPU-hungry applications out there.

Sunday, 2 July 2017

Ubuntu DNS host resolution issue fixed

Unknown host? What? Why?

The other day I moved from static IPs to DHCP in my VirtualBox lab, and once I did that I found an issue that perhaps others are facing.

While the dynamically assigned DNS was able to resolve external domains such as google.com, I was not able to resolve local ones.

The cause of this issue is the hosts line in /etc/nsswitch.conf. If you look closely you can see that dns comes after the [NOTFOUND=return] entry. Now you might be asking how Google was resolving at all if dns is not used; well, that would be the power of mDNS, and since my DNS wasn't supporting mDNS, local lookups failed.

Original
hosts:          files mdns4_minimal [NOTFOUND=return] dns

Fixed
hosts:          files dns mdns4_minimal [NOTFOUND=return]




So I moved dns to the second entry of the line, leaving everything else the same, and poof, like magic, problem solved.
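
To confirm the fix, getent is handy since it resolves through nsswitch.conf just like applications do; myserver below is only an example, so use a hostname from your own lab:

getent hosts myserver
getent hosts google.com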

Wednesday, 10 May 2017

Restoring the Windows Installer

If you didn't know this already, it is a bad idea to delete files from the Windows Installer folder.
The reason for this is that when applying Windows updates, Windows will often look to compare against the previous version, and if those files are missing the patch will fail to install.

Chances are you know this already if you are reading this.

Now the question is how you can fix it... well, I'm here to tell you there is no easy way.
You will need to download all the install media and extract all the service packs and hotfixes that are missing; this is not so hard.

Then you need to copy the correct one into the Windows Installer folder with the right name.
This is where a PowerShell script from Ahmad Adel Gad might just save you a lot of time.
https://gallery.technet.microsoft.com/scriptcenter/Restore-the-Missing-d11de3a1

This will tell you what packages are missing and, when pointed to the media, copy them to the Windows Installer folder for you with the correct names.
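
If you want a quick overview of what is missing before you run that script, here is a minimal sketch using the Windows Installer COM API (run as administrator; treat it as a starting point rather than a complete check) that lists installed products whose cached package is gone:

$installer = New-Object -ComObject WindowsInstaller.Installer
$products = $installer.GetType().InvokeMember('Products', 'GetProperty', $null, $installer, $null)
foreach ($code in $products) {
    #ProductName and LocalPackage are standard Windows Installer product properties
    $name  = $installer.GetType().InvokeMember('ProductInfo', 'GetProperty', $null, $installer, @($code, 'ProductName'))
    $cache = $installer.GetType().InvokeMember('ProductInfo', 'GetProperty', $null, $installer, @($code, 'LocalPackage'))
    if (-not $cache -or -not (Test-Path $cache)) { Write-Host "Missing cached package for $name ($code)" }
}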

Monday, 8 May 2017

PowerShell From Linux to Windows and Windows to Linux

Great times are coming: Linux can now manage Windows, or Windows can manage Linux.
However you look at it, having cross-platform (heterogeneous network) scripting is always a good thing. While PowerShell 6 is still in the alpha stage at this moment, I have high hopes that it will make my life a lot easier when it comes to moving between servers.

With that said, I'm going to do a step-by-step install on Ubuntu 16.04 and Windows 2016.
Note that at the moment Ubuntu 17.04 isn't working; this might change in the near future.

Ubuntu
You'll first need to import the keys and then download the package.

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | sudo tee /etc/apt/sources.list.d/microsoft.list

Now update apt-get:
sudo apt-get update

Install PowerShell
sudo apt-get install -y powershell

Start PowerShell
powershell

If you haven't already got an SSH server running, now's your chance to install it.
sudo apt install openssh-client
sudo apt install openssh-server

You now have to add some lines to the config:
sudo nano /etc/ssh/sshd_config

Make sure to enable the following lines:
PasswordAuthentication yes
RSAAuthentication yes
PubkeyAuthentication yes

Add the following line to the Subsystem section:
Subsystem powershell powershell -sshs -NoLogo -NoProfile

Save the changes and restart the SSH server for the changes to take effect.
sudo service ssh restart

You are now done with the Ubuntu config; moving on to the Windows server.

Windows 2016 or 2012R2

First of all, download both PowerShell and the Win32 or Win64 version of OpenSSH:
https://github.com/PowerShell/Win32-OpenSSH/releases
https://github.com/PowerShell/PowerShell

Install PowerShell 6 by following the prompts.
For OpenSSH, extract it to C:\Program Files\OpenSSH or another directory; remember the location, as you'll need it later.

Inside the extracted OpenSSH folder there is an install-sshd.ps1 script; run it using the following PowerShell command from the same directory.
powershell -executionpolicy bypass -file install-sshd.ps1

Next, while still in the OpenSSH folder, create the host keys.
.\ssh-keygen.exe -A

Now you will need to allow SSH connections into your server.
New-NetFirewallRule -Protocol TCP -LocalPort 22 -Direction Inbound -Action Allow -DisplayName SSH

Next, we set the OpenSSH services to start automatically.
Set-Service sshd -StartupType Automatic
Set-Service ssh-agent -StartupType Automatic

Finally, we add some lines to the sshd_config file: open it in Notepad and make sure the three following lines are uncommented.
PasswordAuthentication yes
RSAAuthentication yes
PubkeyAuthentication yes

You will also need to add a line in the Subsystem section next to sftp. Please keep in mind that your PowerShell 6 path might differ if your release is earlier or later than the one I was using, so adapt it as needed.
Subsystem powershell C:/Program Files/PowerShell/6.0.0-alpha.18/powershell.exe -sshs -NoLogo -NoProfile

Save the change, and now add the OpenSSH folder to the system PATH environment variable:
setx PATH "$env:PATH;C:\Program Files\OpenSSH" /M

Restart OpenSSH for the changes you've made to take effect.
Stop-Service sshd
Start-Service sshd

Now open PowerShell 6 and, as the final step, set up remoting by running the script:
.\Install-PowerShellRemoting.ps1

You can now PowerShell between Linux and Windows servers without a problem.
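
As a quick smoke test, here is roughly what a session looks like; ubuntu01 and admin below are placeholders for your own host and user, and depending on the PowerShell 6 build you may also need the -SSHTransport switch.

#Interactive session from the Windows box to the Linux box (or the other way around)
Enter-PSSession -HostName ubuntu01 -UserName admin

#Or run a single command remotely
Invoke-Command -HostName ubuntu01 -UserName admin -ScriptBlock { Get-Process | Select-Object -First 5 }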

Enjoy...



Saturday, 8 April 2017

VirtualBox Test Lab

Recreating a lab can be painful; however, it needn't be.
By creating and using images you can have your test environment up and running in minutes whenever you need to work on a configuration.

For example, I often like to test my scripts and deployments in a lab environment before letting them near production, mostly to check that I've not made any typos; and in case you want to do something similar, here is how to set up a lab using VirtualBox.

The first step is to disable the DHCP server in VirtualBox. This might sound strange, but unless you want to manually set the DNS on all of the images you are planning to use, the best option is to run DHCP from the first server you set up.
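
You can do this from the VirtualBox preferences, or from the command line with VBoxManage; the network name below is only an example, so list yours first:

VBoxManage list dhcpservers
VBoxManage dhcpserver modify --netname HostInterfaceNetworking-vboxnet0 --disable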

Now, I like to use the 172.16.1.x range; you might want to use another, and honestly it doesn't matter what range you use, just as long as it doesn't clash with a resource you will need later.




After that you'll need to create your images. This means setting up one of each OS you plan to use; in my case, that's Windows 2016, Windows 2012R2 and Ubuntu Server.

This part is where you invest your time. For example, Windows 2016 takes about 30 minutes for me to patch to the current level; however, Windows 2012R2 takes close to an hour, so updating it to the current level and then templating it saves me hours of Windows updates later. Or you could choose to just turn off Windows updates.

The same goes for Ubuntu: while the overall update process is faster, I still don't want to spend the time applying a large number of patches to get to the current level each time.

So go ahead and create the new virtual server of your choice and finish the install of the base OS.
In the Windows case, I do three things after the install:

1) I run Windows updates; this might take some time depending on your internet connection and how much CPU you've assigned to the VM.

2) I copy any post-startup configuration I plan to use, such as scripts, to the image.

3) I run the two commands needed to let it start up cleanly each time.

dism /online /cleanup-image /StartComponentCleanup /ResetBase
C:\Windows\System32\Sysprep\Sysprep /generalize /oobe /mode:vm /shutdown

Now the "dism /online /cleanup-image /StartComponentCleanup /ResetBase" cleans out anything I might of done and have left over before I save the image and the second command "C:\Windows\System32\Sysprep\Sysprep /generalize /oobe /mode:vm /shutdown" resets the SID so that if you plan to use this a domain controller you won't run into duplicate SID issues.

At this point you should have an up-to-date image ready to export as a template. This is where you use the VirtualBox "Export Appliance" function to create a nice OVA file; this is your final template file.
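
The export can also be scripted with VBoxManage if you prefer the command line; the VM name and output path below are placeholders:

VBoxManage export "Win2016-template" --output C:\Templates\win2016.ova
VBoxManage import C:\Templates\win2016.ova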

The process is almost the same for Ubuntu:

1) Run sudo apt-get update && sudo apt-get upgrade

2) Install any packages I want to be there from the start, such as openssl and openssh-server

3) Clean up the server by running the following shell script.

The script looks like this, removing the logs and cleaning up the history so the image is ready for fresh use by the next session.

#!/bin/bash

#update apt-cache
apt-get update

#Stop services for cleanup
service rsyslog stop

#clear audit logs
if [ -f /var/log/audit/audit.log ]; then
    cat /dev/null > /var/log/audit/audit.log
fi
if [ -f /var/log/wtmp ]; then
    cat /dev/null > /var/log/wtmp
fi
if [ -f /var/log/lastlog ]; then
    cat /dev/null > /var/log/lastlog
fi

#cleanup persistent udev rules
if [ -f /etc/udev/rules.d/70-persistent-net.rules ]; then
    rm /etc/udev/rules.d/70-persistent-net.rules
fi

#cleanup /tmp directories
rm -rf /tmp/*
rm -rf /var/tmp/*

#cleanup current ssh keys
rm -f /etc/ssh/ssh_host_*

#add check for ssh keys on reboot...regenerate if necessary
sed -i -e 's|exit 0||' /etc/rc.local
sed -i -e 's|.*test -f /etc/ssh/ssh_host_dsa_key.*||' /etc/rc.local
bash -c 'echo "test -f /etc/ssh/ssh_host_dsa_key || dpkg-reconfigure openssh-server" >> /etc/rc.local'
bash -c 'echo "exit 0" >> /etc/rc.local'

#reset hostname
cat /dev/null > /etc/hostname

#cleanup apt
apt-get clean

#cleanup shell history
history -w
history -c

This is still a very simple example, as you might also want to import repositories or copy SSH keys over for a quick setup.


Tuesday, 21 March 2017

Enable SQL Server 2016 AlwaysOn Availability Groups Using Windows PowerShell

Always On Availability Groups are perhaps one of the best things in SQL Server, but they take time to set up unless you have it scripted out.
Here is one of the problems: going to each node and enabling Always On. For a default instance this can be easy using the SQLPS module and the Enable-SqlAlwaysOn cmdlet.

Using the node name and cycling through each node of the cluster, we can enable Always On without entering the names of the nodes one at a time.

foreach ($node in Get-ClusterNode) {Enable-SqlAlwaysOn -ServerInstance $node -Force}

Now, that might work for a default instance, but what if you have named instances on your server? Well, here is a just-as-easy way to deal with it, using one extra trick from SQLPS: the SQLSERVER:\SQL provider path. Using this, we can call Get-ChildItem to retrieve the instance names.

foreach ($node in Get-ClusterNode) {
    $loop = Get-ChildItem SQLSERVER:\SQL\$node\
    Enable-SqlAlwaysOn -ServerInstance $loop.Name -Force
}


This simple trick allows me to build an Always On cluster with named instances just as easily as with default instances, without even needing to know the instance names.

Saturday, 18 March 2017

XML Templating

Working with XML to do complicated things quickly is great, but creating the XML files can be a pain if you don't work with Excel.


So here is how to make that much easier. First of all you need to create an XML file, and then you need to create at least two sets of records so that Excel can see that the structure is consistent; once that is done you can import the file into Excel to add the data.

So, step one: create an XML file.
Here is my example file.

<?xml version="1.0" encoding="utf-8"?>
<Computers>
    <Servers>
        <Price>350</Price>
        <Brand>Hewlett-Packard</Brand>
        <Model>ML350</Model>
        <Color>Silver</Color>
    </Servers>
    <Servers>
        <Price>300</Price>
        <Brand>Dell Inc</Brand>
        <Model>PowerEdge R730</Model>
        <Color>Carbon Black</Color>
    </Servers>
    <Servers>
        <Price>400</Price>
        <Brand>IBM</Brand>
        <Model>IBM Lenovo x3650</Model>
        <Color>Carbon Black</Color>
    </Servers>
</Computers>

Once you have created and saved the template, using whatever editor you like (personally, it was Notepad for me), you'll need to open data.xml in Excel.
Be sure to change the file-type filter to XML or you won't find it.
Another option is to right-click on the file and choose to open it with Excel.

Once you open it, you're going to get asked a few questions, the first one being how you want to open this file; the answer you want is "As an XML table".

Next it will tell you that the XML source doesn't refer to a schema, so Excel will create one based on the example data in the file.


This is all fine, and once open the file will have whatever data you entered into your example XML. However, only columns with data will show in Excel, so don't freak out if the other elements don't show; they are still there.


Last but not least, when you save the file make sure you use the XML extension, as Excel by default will try to save it as an Excel workbook.


And that's it: you can now create very complex data sheets using Excel in minutes, without the effort you would have in coding them by hand.
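
One last tip: as a quick sanity check that what Excel saved is still plain XML, you can read it back with a couple of lines of PowerShell (assuming the file is saved as data.xml in the current directory):

[xml]$data = Get-Content .\data.xml
$data.Computers.Servers | Format-Table Brand, Model, Price, Color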

Tuesday, 14 March 2017

SQL how not to cursor

While many of us know we should not use cursors, we often do for quick loops, and this is not a good practice. So I'm going to show you a very quick example of how to create a SQL statement that builds a command list in a temp table and then runs it without a cursor to loop over the rows.

First here is how it might look using a cursor.


IF OBJECT_ID('tempdb..#query') IS NOT NULL
    DROP TABLE #query;
CREATE TABLE #query
(
    ID INT IDENTITY(1, 1),
    query nvarchar(4000)
);

INSERT INTO #query
(query)
select
'ALTER DATABASE [' + name + '] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [' + name + '] SET READ_ONLY WITH NO_WAIT;
ALTER DATABASE [' + name + '] SET MULTI_USER;'
from sys.databases where database_id > 4

DECLARE @Sql nvarchar(4000)
DECLARE Cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT query FROM #query -- table where the sql is stored
OPEN Cur
FETCH NEXT FROM Cur INTO @Sql
WHILE (@@FETCH_STATUS = 0)
BEGIN
    EXEC sp_executesql @Sql
    FETCH NEXT FROM Cur INTO @Sql
END
CLOSE Cur
DEALLOCATE Cur;
DROP TABLE #query;
DROP TABLE #query;


And now here is the same process without using a cursor to achieve the same results.
Not only does it avoid the cursor, but if you look closely it's even a few lines shorter. (Note the ORDER BY ID, which guarantees the row we execute is the same row we then delete.)

IF OBJECT_ID('tempdb..#query') IS NOT NULL
    DROP TABLE #query;
CREATE TABLE #query
(
    ID INT IDENTITY(1, 1),
    query nvarchar(4000)
);

INSERT INTO #query
(query)
select
'ALTER DATABASE [' + name + '] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [' + name + '] SET READ_ONLY WITH NO_WAIT;
ALTER DATABASE [' + name + '] SET MULTI_USER;'
from sys.databases where database_id > 4

DECLARE @sql nvarchar(max), @id int
WHILE EXISTS (SELECT 1 FROM #query)
BEGIN
    SELECT TOP (1) @id = ID, @sql = query FROM #query ORDER BY ID
    EXEC (@sql)
    DELETE FROM #query WHERE ID = @id
END
DROP TABLE #query;


Hopefully, this will help keep your systems as cursor-free as possible.

Tuesday, 7 March 2017

Missing TLS 1.2

Most days you get to use existing knowledge, and then just sometimes something cool comes your way.

This week we hit a problem where an application server and client couldn't communicate. You could ping between them and interact with file shares; almost everything looked normal, yet the application could not connect.

After looking at the event log I found this error:

Log Name: System
Source: Schannel
Date: 11.02.2017 16:37:44
Event ID: 36888
Task Category: None
Level: Error
Keywords:
User: SYSTEM
Computer: FR11.CONSENTO.COM
Description:
A fatal alert was generated and sent to the remote endpoint. This may result in termination of the connection. The TLS protocol defined fatal error code is 40. The Windows SChannel error state is 1205.

This error shows that the communication between them, which was trying to take place over SSL/TLS, was failing.

Taking a closer look at the registry of both the client and the server, the problem becomes clear, as the registry keys are not the same.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\SSL\00010003
(Default) REG_SZ NCRYPT_SCHANNEL_SIGNATURE_INTERFACE
Functions REG_MULTI_SZ RSA/SHA256\0RSA/SHA384\0RSA/SHA1\0ECDSA/SHA256\0ECDSA/SHA384\0ECDSA/SHA1\0DSA/SHA1

On the other servers:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\SSL\00010003
(Default) REG_SZ NCRYPT_SCHANNEL_SIGNATURE_INTERFACE
Functions REG_MULTI_SZ RSA/SHA512\0ECDSA/SHA512\0RSA/SHA256\0RSA/SHA384\0RSA/SHA1\0ECDSA/SHA256\0ECDSA/SHA384\0ECDSA/SHA1\0DSA/SHA1

This turns out to be a known issue that is addressed by KB2975719, or by a manual registry tweak.
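
To compare the key on two machines without opening regedit, a quick PowerShell check of the Functions value looks like this:

$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\SSL\00010003'
(Get-ItemProperty -Path $key).Functions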

Thursday, 2 March 2017

T-SQL with Powershell

If you're a DBA or developer and you need to run T-SQL statements to get your results, then this is for you.

Now, I like to keep a healthy, happy SQL Server, so over the years I have built up my own best-practice T-SQL. That was great, but it became quite hard to connect to ever-growing farms, which resulted in me creating a reporting database, and that then meant checking it and adding columns whenever I created new reports, etc.

A nightmare. So here is another way to do it: PowerShell to the rescue.
Invoke-Sqlcmd allows me to run the T-SQL against the server, for example:


SELECT CONVERT(INT, ISNULL(value, value_in_use)) AS config_value
FROM sys.configurations WHERE name = N'xp_cmdshell'

If I want to run that check in PowerShell, it looks something like this:

$instance = Get-Content -Path "C:\instances.txt"

#Check xp_cmdshell
foreach ($server in $instance) {
    try {
        $xp_sqlcmd = Invoke-Sqlcmd -QueryTimeout 200 -Query "SELECT CONVERT(INT, ISNULL(value, value_in_use)) AS config_value FROM sys.configurations WHERE name = N'xp_cmdshell';" -ServerInstance $server

        Write-Host "My output is" $xp_sqlcmd.config_value
    }
    catch {
        Write-Host "Neo Broke the Matrix"
        Write-Host $_
        break
    }
}
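
For reference, instances.txt is just a plain-text list of server or server\instance names, one per line; the names below are made up:

SQL01
SQL02\REPORTING
SQL03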

Now, I know that looks way longer, right? But think about it: that can run against 200 servers. And if you want to output to a file with, let's say, good vs bad values, it can look like this:

$logout = "C:\Users\Administrator\Desktop\Results.csv"
#clean up the old log
$logout | Remove-Item -Force -ErrorAction SilentlyContinue

$instance = Get-Content -Path "C:\instances.txt"

#Formatting the report header
"Check,Finding,Server,Rating" | Add-Content $logout

#Check xp_cmdshell
foreach ($server in $instance) {
    try {
        $xp_sqlcmd = Invoke-Sqlcmd -QueryTimeout 200 -Query "SELECT CONVERT(INT, ISNULL(value, value_in_use)) AS config_value FROM sys.configurations WHERE name = N'xp_cmdshell';" -ServerInstance $server

        $check = $xp_sqlcmd.config_value
        "xp_cmdshell check should return 0,$check,$server,High" | Add-Content $logout

        if ($xp_sqlcmd.config_value -eq "1") {
            Write-Host -BackgroundColor Yellow -ForegroundColor Red "XP_CMDSHELL Is Enabled. This Is Not Desired On $server"
        }
        else {
            Write-Host -BackgroundColor Green -ForegroundColor Blue "XP_CMDSHELL Is Disabled On $server"
        }
    }
    catch {
        Write-Host "Neo Broke the Matrix"
        Write-Host $_
        break
    }
}

Now I have a powerful reporting script, and it can get longer and better over time as I add to it, without needing a database or manually connecting to each server.

Note: if you have SQL Enterprise you could do this by registering each server one at a time into Management Studio; however, this is quicker and more dynamic in the long run.

Saturday, 18 February 2017

Error handling PowerShell

Here are some good things to know about error handling. First, there is no one way to do it: you can use a single file or many files; log just to a file or output to the terminal, the event log, email, etc. Now, I'm going to skip the email one, simply because I don't believe it is the best way to use error handling, and I get enough mail already.

First of all, for testing your error handling you need to create an error; so for this I have a simple script that gets the event log from a computer that doesn't exist (SKB5223 in my case), and since there is no such machine it will error.

$Time = Get-Date
try
{
    Get-EventLog -ComputerName SKB5223 -LogName Security -After (Get-Date).AddDays(-3) |
        Where-Object { ($_.InstanceID -eq 4634) -or ($_.InstanceID -eq 4624) } |
        Select-Object Index, TimeGenerated, InstanceID, Message | Out-Host
}

We use the try statement to encapsulate the command we are running; that way, when it fails, we can use a catch block (shown below) to collect what happened.

Now, the outcome of the script above is an error saying "The network path was not found", so we know we have an error. I could fix that by removing the computer name and having it run locally, or by putting a good name there; however, for the moment let's keep it as is so that we can better test the error handling.

Now, as with all things, there are many things we can do with the output from the error. Let's say I want to log the error to the command-line output, so I catch the error:

Catch
{
Write-host -ForegroundColor Red "ERROR HAS HAPPENED"
}

How about logging the error to a file? This is almost the same, as it uses the catch to collect the output, but we also need to define where the logfile will be. Since I will most likely have more than one command, and therefore will want to reuse the logfile without specifying its location each time, I'll use the $ErrorLog variable to store the location.

I'm also going to have two log locations, one for errors and one for warnings; this way I can keep logging information and errors separate.

##Setting Error Log Location
$ErrorLog = "Error.log"
$WarningLog = "Warning.log"

###
#Script block goes here
###

Catch
{
"$Time ERROR $_" | Add-Content $ErrorLog
}

Next we will look at the dialog box. This is not the most common approach these days; however, it can still be good if you want to make sure the user sees the error when the script runs. That said, these days I would say not to use this, as it puts a hold on your script and makes it hard to run automatically.

Catch
{
$wshell = New-Object -ComObject Wscript.Shell
$wshell.Popup("Operation Error $_",0,"Error",0)
}

If you use scripts a lot and want to know when they ran and when they errored, perhaps even using PowerShell to check the status, then writing to the event log might be best for you.
However, it is maybe the longest to set up, from the point of view that you need to define the event ID, the source and the log that the entry will be placed into.

Catch
{
Write-EventLog -EventId 8888 -LogName 'Windows PowerShell' -Message "The script has encountered an error: $_ This prevents the script finishing" -Source PowerShell -EntryType Error
}

So now let's see all of those in one script and how it might look. Below are all of the examples from above, with the comments added so that you can see how it fits together, plus one extra we haven't covered yet: stopping the script (the exit at the end of the catch block) when you don't want to continue with the next action in the script.

$Time = Get-Date
##Setting Error Log Location
$ErrorLog = "Error.log"
$WarningLog = "Warning.log"

##Write beginning
Write-Host -ForegroundColor Red "STARTING TO GET SECURITY LOG $Time"

#Trying command
try
{
    Get-EventLog -ComputerName SKB5223 -LogName Security -After (Get-Date).AddDays(-3) |
        Where-Object { ($_.InstanceID -eq 4634) -or ($_.InstanceID -eq 4624) } |
        Select-Object Index, TimeGenerated, InstanceID, Message | Out-Host
}

#If error capture
catch
{
    #write error to file
    "$Time ERROR $_" | Add-Content $ErrorLog

    #write error to eventlog
    Write-EventLog -EventId 8888 -LogName 'Windows PowerShell' `
        -Message "The script has encountered an error: $_ This prevents the script finishing" `
        -Source PowerShell -EntryType Error

    #write error to console
    Write-Host -ForegroundColor Red "ERROR HAS HAPPENED"

    #write error to popup
    $wshell = New-Object -ComObject Wscript.Shell
    $wshell.Popup("Operation Error $_", 0, "Error", 0)

    #stop script
    exit
}

Since I raised the point about stopping, I should also show you another way to handle this within the command, using "-ErrorAction Stop"; however, the explicit stop in the catch block makes it easier not to forget.
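
For completeness, -ErrorAction Stop turns a non-terminating error into a terminating one so that the catch block fires; a minimal sketch:

try
{
    Get-EventLog -ComputerName SKB5223 -LogName Security -ErrorAction Stop | Out-Host
}
catch
{
    Write-Host -ForegroundColor Red "ERROR HAS HAPPENED: $_"
}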

You can also choose to take an action as part of the error handling; for example, rebooting a server, or any number of other actions.

Friday, 20 January 2017

Cheap Global Website Hosting

One of the most interesting things you can do with a website is to have it protected, with the source located in a safe harbour; that could be a data centre that is local to you, or one that has good data-protection laws.

But in all cases the website must be visible to the outside world; this can be done with a CDN (content delivery network) so that all users have the lowest latency.

Now, you can build your own CDN (I'll go on to explain how in a later post) or you can use one of the existing commercial ones like CloudFlare.

Behind that, you need something to present your CDN with data. This is the highest layer of your website and should ideally be a caching server, with the master one or two layers down. In my case, the proxy layer serves on port 80 but gets its content from port 8080, which is an SSH tunnel to the source; so while the full content is cached, the master is still accessed in case of changes.
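
For reference, such a tunnel can be as simple as a local port forward; user and origin.example.com below are placeholders, and this assumes the origin serves HTTP on port 80:

ssh -N -L 8080:localhost:80 user@origin.example.com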


Now I know what you're thinking: that's fine for static content, but what about dynamic content?
Basically the same thing; the only difference is how you store data, whether the proxies run a full copy with the database, or the source handles all the write actions and leaves the reads to the proxies.


The illustration above shows a more practical version of the layout as looking at these things in a flat view is often hard to understand.

So what are the advantages of this design? To start with, you have reduced the load on the source webserver to only the write changes, allowing all reads to be cached where possible at the proxy and CDN layers.

So a typical design like this can handle 1,000 users per webserver, giving this design the capacity to handle 6,000 concurrent users for about $300: impressive when you think that you could easily have 10,000 users with just 6,000 active at any one point.

The design also allows for better security as the source is never publicly exposed to the security risks as two layers would need to be traversed just to get the location of the source, also since much of the content is cached updating the source is all that is needed to release a new version.