Sunday, 16 July 2017

A question on index and statistic columns.

This post is a question about how SQL Server creates statistics for a new index, or in other words: how can the columns for an index and its statistics be the opposite way round from one another?!

Here's what I originally found when having a poke around a database; it's a pretty basic clustered index (with names blanked out to protect the innocent).


We can see that the leading column is a varchar(20) type and the next one is varchar(50). Now let's have a look at the statistics for this particular index; for reference, this is the only index on the table.
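Since the screenshot can't be reproduced here, the same information can be pulled from the catalog views. As a sketch (the table name dbo.People is hypothetical, standing in for the blanked-out one), the key columns of each index can be listed like this:

```sql
-- List the key columns of every index on the table, in key order
-- (dbo.People is a hypothetical stand-in for the real table)
SELECT i.name       AS index_name,
       ic.key_ordinal,
       c.name       AS column_name
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
    ON i.object_id = ic.object_id AND i.index_id = ic.index_id
JOIN sys.columns AS c
    ON ic.object_id = c.object_id AND ic.column_id = c.column_id
WHERE i.object_id = OBJECT_ID('dbo.People')
  AND ic.key_ordinal > 0   -- key columns only, not includes
ORDER BY i.name, ic.key_ordinal;
```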


This time the leading column is the varchar(50), which is then followed by the varchar(20) column, hmm. Now column order is pretty important, and interestingly enough the varchar(50) column is actually the more selective of the two, so I wondered if this is perhaps why the statistics are in a different order.

In order to test this I've used an old SQL 2014 test database that's been hanging around on a dev instance of mine. It has a very simple table consisting of an ID field, a first name and a last name. Incidentally, the first name and last name columns are varchar fields with lengths of 20 and 50.

Here's a new clustered index based on the first and last name columns:
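As a sketch with hypothetical object names (the real ones are in the screenshot), the definition is along these lines:

```sql
-- Hypothetical names: dbo.People with FirstName varchar(20), LastName varchar(50)
CREATE CLUSTERED INDEX cix_People
    ON dbo.People (FirstName, LastName);
```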


After I have created it (yeah I know the name sucks btw) I'll check the statistics:
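One way to check, beyond the SSMS screenshot (a sketch again, with the hypothetical table name), is to list the columns behind each statistics object on the table:

```sql
-- Columns behind each statistics object on the table, in stats column order
SELECT s.name  AS stat_name,
       sc.stats_column_id,
       c.name  AS column_name
FROM sys.stats AS s
JOIN sys.stats_columns AS sc
    ON s.object_id = sc.object_id AND s.stats_id = sc.stats_id
JOIN sys.columns AS c
    ON sc.object_id = c.object_id AND sc.column_id = c.column_id
WHERE s.object_id = OBJECT_ID('dbo.People')
ORDER BY s.name, sc.stats_column_id;
```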


That seems fine, or at least the order is the same as the index, which we'd expect. 


Now let's recreate the index but modify the column order so it looks like this, with the last name now the leading column:
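Again as a sketch with hypothetical names, the rebuild with the reversed key order might look like:

```sql
-- Recreate the clustered index with LastName leading this time
CREATE CLUSTERED INDEX cix_People
    ON dbo.People (LastName, FirstName)
    WITH (DROP_EXISTING = ON);
```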


Now if I check the statistics...


They are in a different order to how I have just defined my index. 

Now originally I did wonder if the statistics had been manually altered. However, just to rule that out: if you try to change the columns of statistics in SSMS, you get the following:


Now of course this is with a clustered index; what happens if I try the same with a non-clustered index?
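For the non-clustered test, the equivalent statement (hypothetical names once more) would be along the lines of:

```sql
-- Non-clustered index with the same reversed column order
CREATE NONCLUSTERED INDEX ix_People_LastFirst
    ON dbo.People (LastName, FirstName);
```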


Here is my new index where once again I have altered the column order. The statistics this time look like this:


Okaaay, this time the statistics reflect the column order of the non-clustered index that I've just created. This makes (at least in this example) the statistics creation process different between a clustered and a non-clustered index.

So the question: why the difference? Has SQL Server decided on the best column order for statistics for the clustered and non-clustered indexes? Has the creation process for the clustered index just not picked up on the column modification? Or does it use another method entirely when creating stats?

Monday, 19 June 2017

VIEW SERVER STATE

Quick reference post on the VIEW SERVER STATE permission within SQL Server. This is a server level permission that once granted enables a login to view the results of Dynamic Management Objects.

I find that it's typically used for troubleshooting or performance tuning related activities and is a good alternative to the good old sysadmin role membership route, especially for external people.

To demonstrate what the permission allows I'll first create a new login on a test instance with the following command:

CREATE LOGIN SQLClarity WITH PASSWORD = 'SQLCl@r1ty' 

Now I've logged into Management Studio with the credentials I've created above. So let's try to select records from a DMV, in this case my instance's cumulative wait statistics:

SELECT * FROM sys.dm_os_wait_stats

I get the following error:

Msg 300, Level 14, State 1, Line 1
VIEW SERVER STATE permission was denied on object 'server', database 'master'.
Msg 297, Level 16, State 1, Line 1
The user does not have permission to perform this action.

SQL Server has been quite specific on how to resolve the issue by stating that the VIEW SERVER STATE permission was denied. 

There are a couple of ways we can grant this permission. The first is from the Server Properties > Permissions window, as in the image below. Remember that although the error message indicates the issue is on the master database, it is a server level permission, not a database one (such as VIEW DATABASE STATE).


Or we can use T-SQL syntax such as the following:

GRANT VIEW SERVER STATE TO SQLClarity

Now the results from the DMV are visible without error. 
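To double-check the grant from the SQLClarity session itself, sys.fn_my_permissions can be queried; this simply lists the effective server-level permissions of the current login:

```sql
-- Run while connected as SQLClarity: confirm the server-level permission
SELECT permission_name
FROM sys.fn_my_permissions(NULL, 'SERVER')
WHERE permission_name = 'VIEW SERVER STATE';
```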

This is a really useful way of restricting access for what could typically be viewed as an administrative task. One final word of caution though: this permission is applied at the server level and gives access to all of the Dynamic Management Objects, and whilst in this particular case something like wait statistics might not be that sensitive, the DMVs and DMFs do expose a lot of information, so you have to bear this in mind when applying this level of permission.

Tuesday, 13 June 2017

Databases and DevOps



This is my post for T-SQL Tuesday #91, hosted this month by Grant Fritchey; the subject this time around is Databases and DevOps. For those who aren't aware what T-SQL Tuesday is, it's essentially a monthly blog party where the host (Grant this time) will decide on a topic and fellow bloggers will write a related post on the subject; you can read more about it here.

My post is going to be rather high level (what's new I hear you say!) and that's because this is where I often see DevOps fail: people don't quite grasp the fundamental concepts and requirements to make it work. But to begin with, sing along with me for a second:

Now this is a story all about how
My life got flipped-turned upside down
And I'd like to take a minute
Just sit right there
and I'll tell you about how implementing DevOps sometimes fails.

Okay, it doesn't rhyme, I stole the lyrics and I certainly can't rap (not without whisky anyway) but for some people the concept of DevOps does bring with it the idea of having their (working) life being flipped upside down. People get confused about what it all means and this can cause resistance, an unwillingness to look at what DevOps is trying to achieve and essentially hold on to their current way of working.

The main cause of this tends to be how people go about implementing DevOps. I've seen organisations sort of grasp at the concept, try to introduce it too quickly or even try to impose it and then, quite understandably it fails miserably each time. 

People (or indeed companies) tend to focus on the lower levels of DevOps or even try to get the benefits straight away, the "continuous this" and the "continuous that" when in actual fact they're not even starting at the right place, a case of crawl before you can walk if you like.

The phrase DevOps is the bringing together of two different terms, Development and Operations, so to make a success of it we need to think along those exact same lines. That means we need to focus on two things: communication and collaboration.

Communication is easy, right? After all, everyone kind of talks to one another, so what's the problem? Well, look at the traditional relationship between the Developer and the DBA (operations). Both have been working in very different styles for many years now; developers are making constant changes, pushing out releases as often as possible, whereas the DBA is trying to put the brakes on and keep the systems in a stable state.

This often results in pushback, and whilst they will certainly communicate, it might not necessarily be the right kind of communication, and now we've got to try to get them to meet in the middle somehow and work in a very coupled fashion!

Fundamentally what is needed is an understanding of each other's role. For me this is the real starting point of DevOps, and although in some cases this will mean the breaking down of walls, in no way is it an impossible task. Introducing each other's way of thinking, without trying to abolish the existing mindset but rather with a purpose of helping one another, is how this common approach should be formed, and taken advantage of.

Side Note: I have noticed that this sounds a lot like couples therapy!? Is this the real meaning of DevOps - are we being healed somehow!?

Ultimately this mutual understanding results in a much more solid foundation that can then be used to implement the lower levels of DevOps, such as the different technical methodologies and tool sets.

Some of the most successful DevOps cultures that I have seen are where teams contain developers that are ex database admins and vice versa - yeah it's true, people actually do this! In these cases people haven't just brought their technical skills over to a new team, they've brought their understanding of the other functions too and will often use that in a co-operative manner to find the best solution - essentially, isn't this what DevOps is all about?

Now I am not saying that anyone should start shuffling around their IT department, because that's the last thing you want to be doing. You can't force or impose this concept; it needs to grow, and to some degree you have to let people find their own ways of understanding and working with one another. Whilst challenging, yes, the process doesn't need to be threatening or overwhelming in any way, and done correctly it won't feel like that.

So to go back to the lyrics right at the beginning of this post, no it won't flip your life upside down we just all need to take a minute, sit right there, and learn from one another.

Monday, 12 June 2017

SQL Server on Ubuntu - Installation Overview

As I have gotten into the habit of writing follow-up articles lately, this one is no different: it's an overview of my last post, where I installed Microsoft SQL Server onto the Ubuntu Operating System.

The article is a bit link heavy and that's because I wanted to provide links to the web pages that I have been using to construct this article. One of the great things that I have found about the Linux platform is its documentation; there is a wealth of information out there, both official and via blogs and forums. You certainly get the community sense from these pages, as you do with SQL Server, I hasten to add!

I started out using the guide available from Microsoft: https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-ubuntu which is a pretty standard instruction document for getting SQL Server installed on to Linux. Although it is a pretty straightforward process I did have to deviate from the document from time to time; that's mainly because I have very little Linux experience, so it's a good way to get used to using the CLI (Command Line Interface).

Here's the first command:

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

This is actually where I hit my first problem, which I kind of skipped over in the first article, going straight to the solution. Essentially, if you're following the guide without having installed curl you will get an error like I did:
The program 'curl' is currently not installed.  You can install it by typing:
sudo apt-get install curl

Curl is a tool that enables us to transfer data to or from a server, and specifically in this command we're attempting to import the public repository GPG keys from https://packages.microsoft.com/keys/microsoft.asc, which will enable us to install the SQL Server Ubuntu repository.

So, this didn't work but the message rather handily gives us a solution!

sudo apt install curl

Let's break it down a little bit. First, sudo, which runs a particular command with root permissions; this is as opposed to sudo su, which I had to use later on in the install to switch to superuser mode for the session.

Next is apt. Apt is a command line tool which works with the Advanced Packaging Tool and enables us to perform installs, updates and removals of software packages. In this case we're installing curl, so we use the install command.

At this point our command is saying: as a superuser, use the Advanced Packaging Tool to install; and finally curl. That went off fine, so I tried to run the original command once more:

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

This failed again with a connection refused error, and my initial thought was that perhaps there was some network configuration that I needed to do in the VM, or indeed Ubuntu, but a quick search brought me to the sudo su command.

Now I have to admit, I'm still reading into the differences between sudo, sudo su etc., and I encourage anyone to pick the brains of any Linux friends they have on the security layers. At a high level I can see that sudo is a one-time prompt for root permission, whereas sudo su actually switches user, and because no parameter is specified it switches to the superuser account by default.

This enabled me to install the GPG key. The apt-key command is used to manage the keys within the Advanced Packaging Tool, and add is going to add a new key to the list of trusted keys. Because we have piped the microsoft.asc content in and passed - as the filename, the key is read from standard input:


 add filename
           Add a new key to the list of trusted keys. The key is read from
           filename, or standard input if filename is -.

So now we're ready to register the SQL Server Ubuntu repository:

curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list | sudo tee /etc/apt/sources.list.d/mssql-server.list

A repository is essentially a collection of software for Linux. We use tools to get information about the repository, then download and install the software from the designated servers. Microsoft uses two repositories for software that it builds for Linux: prod, which is used for commercially supported software, and mssql-server, which contains the packages for SQL Server.

Once registered we can install SQL Server. The apt-get command is another command for APT; this time we are using apt-get update, which downloads the latest package lists and information for all the repositories. We then use apt-get install to tell APT we're installing a package, -y to automatically answer yes to all prompts, and then finally mssql-server, which is our package.

sudo apt-get update
sudo apt-get install -y mssql-server

That is it as far as the actual install is concerned, but we now need to configure our SQL Server. To do this we use the mssql-conf tool. mssql-conf allows us to make several changes that are very familiar to those who are used to administering SQL Server, such as modifying file locations or enabling/disabling trace flags.
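As a couple of sketched examples of the kind of changes mssql-conf supports (the setting names here are recalled from the Microsoft documentation, so treat them as assumptions and check the docs before running):

```shell
# Change the default data file location (assumed setting name; restart required)
sudo /opt/mssql/bin/mssql-conf set filelocation.defaultdatadir /var/opt/mssql/data

# Turn a trace flag on (assumed subcommand syntax)
sudo /opt/mssql/bin/mssql-conf traceflag 3226 on

sudo systemctl restart mssql-server
```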

sudo /opt/mssql/bin/mssql-conf setup

In this case we are using the tool to perform setup, which allows us to specify the administrator password, and once set we are informed that SQL Server has started. The final command, systemctl, is a central management tool that enables us to perform various service management tasks.

systemctl status mssql-server

Here's the final screenshot again that shows the Microsoft SQL Server service up and running on Linux. The whole process was extremely straightforward and I'm looking forward to getting some of the other tools installed and putting the server through its paces. It's worth adding that the VM is running on my laptop quite happily, so as long as you have 3.5GB RAM available for a Linux box then a fully working test instance is something that is very simple (and free) to create.
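As a final sanity check beyond systemctl, and assuming the separate mssql-tools package (which provides sqlcmd) has been installed, a quick connection test might look something like this (the password is a placeholder):

```shell
# Connect locally as SA and return the version string
# (assumes sqlcmd from the mssql-tools package is on the PATH)
sqlcmd -S localhost -U SA -P '<YourPassword>' -Q 'SELECT @@VERSION'
```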



Saturday, 10 June 2017

Installing SQL Server on Ubuntu 16.04.2


This article is going to show you how to set up a Linux/SQL Server test environment, which will consist of a virtual machine using the Ubuntu Operating System with the latest CTP release of SQL Server 2017 running on it.

The plan to make SQL Server available on Linux was announced way back in March 2016, and with the recent announcement of SQL Server 2017 (and subsequent CTP releases) things certainly appear to be right on track for SQL and Linux.

It's worth adding that in recent weeks I have started to see organisations really take up the idea; I have spoken to a few people who are creating their own test boxes and have started to think about how to use this combination. Not only that, they've also started to ask for people with the right technical knowledge too, so perhaps if you are a DBA who hasn't had any exposure to Linux then now is probably the right time to start! All in all, it's an encouraging sign for Microsoft.

Anyway, before we get going: I'm going to be using VMware Workstation 12 Player to create the Ubuntu virtual machine. You can download the software from this link and use VMs for non-commercial use.

To start off I need to download the Ubuntu Operating System ISO, which is available from here, where you will find the following two download options:


I went for the 16.04.2 LTS version, and once it was downloading the following message was displayed, which I had great delight in sharing with my open-source buddies (this one is for you, Adrian).


Once the download has finished I can open up VMWare Player and select the Create a New Virtual Machine option as shown in the image below:


From here I can choose how the Operating System will be installed: from a DVD in my machine, from an ISO (which we will select), or I can install an OS later. Here we can see that I have browsed to the downloaded ISO file and the install process has recognised that it's the Ubuntu 16.04.2 OS.


After clicking next some user details are required:



Which actually tells me off, as user names can only be lowercase apparently, so I fixed that and carried on to the next part, where I need to specify a name for my new Virtual Machine:



Clicking next takes me to the disk capacity screen. I left the options as default, so I used a 20GB max disk size and left the split virtual disk into multiple files option selected:




After clicking next we move on to create the virtual machine; however, before we click Finish and proceed with the create/install process I need to make a slight modification to the configuration of my VM.



The system requirements for running SQL Server on Ubuntu 16.04.2 contain the following:

Note
You need at least 3.25GB of memory to run SQL Server on Linux. For other system requirements, see System requirements for SQL Server on Linux.

On the create VM window the Memory is currently set to 1024 MB, so by clicking the Customize Hardware button I can change the allocated memory to 4GB (4096 MB), as in the screenshot below:



I can then click Close as there are no more hardware configurations that I need to make and now I can click Finish and the install process will start; if prompted to install VMware Tools for Linux then go ahead and Download and Install.


Pretty neat install screen, you just don't see enough purple these days!

Once installed the virtual machine will reboot and Ubuntu will start. I get presented with a login screen where I need to enter the username and password that I specified during the install process and now I am ready to go!

Ubuntu!

Now, I was following the initial guide that is available here but ran into an error at the very beginning when trying to import the GPG keys. It wasn't a biggie, as it just meant I didn't have the curl tool, so I had to run the following command first:

sudo apt install curl

Before I could successfully run the curl command:

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

This brought up an install progress type screen, and because the Time Spent value was increasing I figured things were progressing... however, after a few minutes I was given a connection refused error!!

A little bit of digging around soon led me to a solution: superuser mode, which reminded me a little of the run as administrator option in Windows.

To start superuser mode type the following:

sudo su

Then I was able to run the curl command once more. For info, here's a screenshot containing the connection error and the subsequent sudo su and completed curl commands:



Now for the next step I need to register the mssql-server repository:

curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list |
sudo tee /etc/apt/sources.list.d/mssql-server.list

No errors, which is good, and I can also quit superuser mode at this point:

exit

Now for the actual installation. To do this I need to run the following commands, which will go ahead and install SQL Server:

sudo apt-get update
sudo apt-get install -y mssql-server


The next step is to run mssql-conf setup to specify and confirm the administrator password for SQL Server:

sudo /opt/mssql/bin/mssql-conf setup

Success!

SQL Server is now installed but I can run a quick test to see if the service is running correctly by using the following command:

systemctl status mssql-server

Which brings up the following screen (with a reassuring green selection of text):


That's it, I now have a brand new test instance of Ubuntu with SQL Server running quite happily, for now at least! 

Thursday, 8 June 2017

Update!!


As I have been busy doing all sorts lately, I thought I would write a quick post on what has been going on!

The consultancy website has had quite a few updates over the last couple of weeks. We've started to send out a few posts on social media to introduce people to the training courses we are running this year which we had a really good response to. So much so that we rather hastily added a frequently asked questions page to provide a bit more information to site visitors.

We've also added a new administration service where we provide regular maintenance and management tasks for a set number of hours per month. This is something we've been actively doing for some clients already and I can see this being an area of focus over the next year. 

Although I am biased, I can certainly see the benefit for those organisations that don't have a DBA in place, as they can offload their database maintenance for a set duration knowing that they have our expertise without high day-to-day costs. It's worked really well for some clients and is often a temporary measure until a permanent DBA is brought in. I'd be very interested to hear views and opinions on this way of working.

So I expect more pages to be added to the website and I'll start doing some regular update posts on here on new services and how the consultancy journey is going etc etc.

I've also posted some new reads on SQL Shack the Apex SQL community website. These posts are focused on using Dynamic Management Objects to monitor SQL Server. I've introduced some T-SQL fundamentals in there so it's a very entry level read and hopefully I will be adding some new articles on the same theme over the next few weeks.

Other than that, it was the rather excellent GroupBy conference last Friday and I caught the presentations by Cody Konior and Alex Yates before getting sidetracked by something or other. For those who have no idea what GroupBy is (and that won't be many, if anyone at all), it's essentially a free online conference where you get very awesome presentations by experts on a wide variety of subjects, all for nothing.

This is a GREAT idea, and I use it to listen to presentations on subjects that might not fall into the traditional type of activity that I am used to working with but that I feel I need to raise my own awareness of. So for example, if you're a DBA you might want to catch up on some development related presentations, or vice versa. Without mentioning the rise of DevOps and the modernised platform (yardy yardy yardy) and how all of these areas are actually very useful, it's a great opportunity to spend some time in non-familiar areas without the cost of a conference, not to mention you don't have to leave the house!

So that is all for now; by all means drop me a line on anything SQL related, or if you have any feedback on the website, or anything else for that matter!

Wednesday, 7 June 2017

Why I became a SQL Consultant.

This post is kind of a follow-up to my recent "How I became a..." article that I wrote about my early days as a DBA. You can read that post and plenty of others from database professionals on this page provided by Kevin Hill. The posts are seriously good reads and, needless to say, I cannot recommend them enough!

This post though is centred on the biggest and most difficult decision that I have ever had to make about my career, and that is to quit working as an employee and become a consultant. I've been wanting to write this post for some time, and because lately I seem to be getting asked at least half a dozen times a day why I made this decision, now seems an ideal time to get it out there.

Firstly, I need to paint a bit of a picture of my situation prior to making this change. I was in a great place: brilliant role, fantastic manager (you're welcome), using lots and lots of technology, pretty much everything you could want. Which of course does beg the question: why did I decide to move away from this?

Don't get me wrong, I'd had this sort of vague aspiration for a long time, largely because of the amazing people out there doing seriously great things already and whilst they were (and still are) a big influence on me it wasn't just a case of "hey I want to do that" and off I went. 

I did a lot of thinking about what I enjoyed the most from working with SQL Server and how I could take that forward. For me the most appealing thing has always been helping people get the most from their SQL Servers. As a DBA this can take many forms; it could be installing and configuring SQL Server as part of an entire enterprise solution, or it could be an upgrade to a newer version, performance tuning, bug fixing, implementing HA/DR, training, mentoring etc etc. The point being that whatever I was doing or whoever I was working with the motive was always the same, to make a particular SQL platform better in some shape or form.

Which is exactly how I think as a consultant.

In fact as a consultant I operate in a very similar way only now it's on a larger scale. I still have exactly the same motives as before but whereas in a permanent role I was working within a single instance, now my driver is to help as many organisations as I can with their data platforms. 

In many ways it's like scaling out a database (bear with me) but rather than distributing data I'm distributing my services and (hopefully) doing more work. Of course that comes with its own set of challenges which I will go into on another post.

Since I made the move to consultancy the #1 question I have been asked is why (hence this blog post), and the answer is always the same. I've always loved working with SQL Server, with different people and different systems, so becoming a consultant was a way to expand on that and reach out to even more people and help them achieve what they want with their data platforms.

If you'd like to know more about the consultancy and the services we provide then you can follow this link, and if you have any questions or feedback then by all means get in touch; it would be great to hear from you.

Thursday, 18 May 2017

Installing Python and using the new command line T-SQL script generator

Earlier today two new command line tools were announced for SQL Server: one an experimental Linux tool, DBFS, which enables access to live DMVs without using a UI like SSMS, and secondly a tool that enables script generation of objects within SQL, rather like the Generate SQL Scripts option in SSMS.

In this post I'm going to run through the installation of the script generator tool and provide a very quick demo. The reason I'm going through this is because in order to install the tool we need to use something called PIP. PIP is a package management system that enables us to install and use packages written in Python. Yeah, Python again!

So if you haven't already installed Python (it isn't available by default on Windows), you'll need to get it from here:

https://www.python.org/downloads/

This will take you to the download screen, and here you get two version options: 2.7.13 or 3.6.1. I went for the latter, and there is a link that explains the differences between version 2 and version 3 of Python: https://wiki.python.org/moin/Python2orPython3



And this is the installation screen; before hitting the install option make sure Add Python 3.6 to PATH is selected, otherwise things won't work as expected after install.


Nothing untoward in the actual installation and we get a successful message in a couple of minutes.



So now that Python is installed we need to install the mssql-scripter package, and that is very straightforward and rather familiar to those using PowerShell, I guess. We use the command pip install mssql-scripter from a Windows command prompt run as administrator.



The installation package goes and does its thing!


Again, after a couple of minutes we get a couple of done messages and are returned to the command prompt; installation all good.


Once complete we can pass the -h parameter to the mssql-scripter command and see a wide array of options that are available to us, but in this demo I'll keep it simple; you can refer to the original link to see the different ways of using the tool.


To use the command I've typed out the following, which will create a script (adventureworks2014.sql) in my working directory containing the DDL and DML commands for the objects in the AdventureWorks2014 database:

mssql-scripter -S localhost -d AdventureWorks2014 > ./adventureworks2014.sql

It's worth noting that although mssql-scripter is brand new, I am using it on a SQL Server 2014 instance.

The script ran in just a few seconds, which is very impressive, and it produced the file as expected. Whilst I'll not put the entire script content in this post, here is a screenshot of the script opened inside SSMS.


Now I did work through some of the examples in the release article; however, I ran into unrecognized argument errors when using -schema-and-data, which is a shame as I did want to test that functionality. I'll await a comment back on that one and then perhaps run through some more scenarios of using the command.

Edit: I did parse the created script: 

Command(s) completed successfully.




New posts on SQL Shack

I've just posted a couple of articles on SQL Shack, the Apex SQL community website. The first post introduces Dynamic Management Objects and how we can use them to monitor SQL Server, it's an entry level read and introduces some T-SQL fundamentals along the way.

Then I've followed that up with an article focusing on sys.dm_exec_requests. This is a very commonly used DMV, as it shows all active requests on a SQL Server, and the article explains some of the key columns to use in monitoring/troubleshooting.
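As a flavour of the kind of query the article covers, here's a sketch using standard sys.dm_exec_requests columns:

```sql
-- Active requests with their wait and blocking information
SELECT session_id,
       status,
       command,
       wait_type,
       wait_time,
       blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id > 50;  -- filters out most system sessions
```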

Here's a link to the first post; sessions and connections and the post contains a link to the next part of the series at the bottom of the page.

Hope you'll find them good reads, and as always feedback is more than encouraged; I'd love to know what you think.

Tuesday, 9 May 2017

A little bit more on how I became a...

My last article was a post for Kevin Hill's brilliant collection "How I Became A", where data professionals share their experiences working in the world of SQL Server. I highly recommend having a browse; there are some great reads in there, and advice too. Without a doubt you should get your own story on there. We all have a very different journey in one way or another, and different experiences mean different viewpoints that can certainly help others - so get writing!

Writing my own post meant that I spent some time reflecting on my early days as an accidental DBA, and as a result of the post I received a couple of questions, which I'll answer here.

The first was on certification: whether I went on any courses and whether I found getting certified useful. I didn't go on any courses; my training was done via a couple of books that I purchased and, most definitely, Books Online. Passing the exam felt like a great achievement, sure, and because I was just starting out it definitely added some weight to my CV, so it did help, but it was both the first and last SQL exam I took.


Where it all began, kind of.

I mention this because since those early days my learning has been without examination. I've certainly spent a lot more time studying since, using a wide range of different methods, but never felt I had to have the certificate. However, I do feel we are in the midst of a huge technical shift in both SQL Server and the Data Platform as a whole, and I think certification will prove very useful at all levels - interested to hear views on that.

I was also asked to describe the low points of being a DBA. It's safe to say every job has its ups and downs, and database administration is no different. The incidents that stick in the mind are those that tend to involve long nights, such as lengthy deployments or when disaster has struck! Disaster Recovery is a fundamental activity for any DBA, and even though DR plans are well practised they're never much fun in reality.

The first major disaster I was involved in struck mid-afternoon and took until 4AM the following morning for everything to be operational again. Then I was on the obligatory progress report at 7AM, which meant zero sleep that night. It was a low at the time, and it certainly wasn't the last late one, but actually, when everything came back online and the formalities were sorted, it was quite a satisfying experience in retrospect.

I've enjoyed looking back at some of my favourite moments and thinking about how much I've enjoyed, no, am enjoying the journey. Whilst things are changing, undoubtedly, it really is a great time to be working in the Data Platform and I encourage everyone to use posts like Kevin's to share their journey too!