T-SQL Tuesday #171 – Describe the Most Recent Issue You Closed

Invitation and roundup from Brent Ozar.

Your readers wonder what kinds of jobs are out there in the database world, what exactly it is that you do, and what your daily grind is like. While it’d be cool to cover all of that, let’s start with something simple.

Your mission for this week: write a blog post about the last ticket you closed, and schedule it for next Tuesday, February 13.

It doesn’t have to be T-SQL. T-SQL Tuesday has evolved to cover all kinds of data topics.

The task/issue doesn’t have to be indicative of your overall career. Our database jobs cause us to do all kinds of oddball things throughout the day. Go into your ticket system, help desk system, list of GitHub issues, or task list right now, look at the last task you checked off, and blog about that.

Don’t include company specifics or anything that might get you in trouble. Just talk in general terms about:

  • Why the task was created (an error popped up, a user had a problem, your boss had an idea, whatever)
  • The general work involved, which online resources you found helpful, and how long it took
  • How often that kind of task pops up in your queue

T-SQL Tuesday #166 – Why Not Extended Events?

Invitation from Grant Fritchey.

Out of 165 T-SQL Tuesday events, just two have been on Extended Events: this one, T-SQL Tuesday #166, and another back in 2018 or 2019 (I forget, and I’m far too lazy to go look).

At conferences I’m frequently the only one doing sessions on Extended Events (although sometimes Erin Stellato is there, presenting a better session than mine). I did a session on Extended Events at SQL Konferenz in Germany earlier this week. Hanging out in the hallway at the event (which was great, by the way), I was talking with some consultants. Here’s their story, paraphrased (probably badly):

“I was working with an organization just a few weeks back. They found that Trace was truncating the text on some queries they were trying to track. I asked them if they had tried using Extended Events. They responded: What’s that? After explaining it to them, they went away for an hour or so and came back to me saying that it had fixed the problem.”

We all smiled and chuckled. But then it struck me. This wasn’t a case of someone who simply had a lot more experience and understanding of Profiler/Trace, so they preferred to use it. They had literally never heard of Extended Events.

Why?

Search Engines

I did a search on Bing, Google, and DuckDuckGo. The results were instructive.

The top result on Bing was a 14-year-old Stack Overflow post. To say the least, yeah, it’s not showing anyone how to use Extended Events. It talks about DMVs in addition to Trace/Profiler.

The top result in Google was a site I’d never even heard of before, Site24x7.com. It talked about DMVs, and nothing else. I couldn’t find a publish date on the article, but since it didn’t talk about sys.dm_exec_procedure_stats, only about query_stats, either the person writing it was unaware of more recent DMVs (and, let’s be clear, calling procedure_stats recent is a stretch), or this is a very old article indeed.

The top result in DuckDuckGo was a post on SQLShack written in 2017, using examples from SQL Server 2016. The tools used were, kind of oddly, Activity Monitor within SSMS and Query Store. No mention of Profiler/Trace, Extended Events, or even DMVs. The second result was the 14-year-old Stack Overflow post.

If you were looking to identify a long-running query, you might be led to believe the consensus is that the only tool to use is DMVs.
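For what it’s worth, a DMV-only approach of the kind those articles describe looks something like this minimal sketch (the TOP count and ordering are my own assumptions, and the results only cover plans still in cache):

```sql
-- Top 10 statements by average elapsed time, from the plan cache.
-- Only shows what is still cached; the TOP count is arbitrary.
SELECT TOP (10)
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
           ((CASE qs.statement_end_offset
                 WHEN -1 THEN DATALENGTH(st.text)
                 ELSE qs.statement_end_offset
             END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_us DESC;
```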

The Whole First Page

The common wisdom is that people never go beyond the first page of search results (I’ve no idea if this is true). So, what’s on the first page?

Well, Bing had eleven results that weren’t ads when I ran the query linked above. It wasn’t until links 7 and 8 that Extended Events were mentioned, and those were the only links to mention them at all. Further, links 7 & 8 were the same article, just published in two different places with a few edits between them. Four of the links were to Microsoft, and NONE (zero, 0, zip, nichts) of those mentioned Extended Events, although they did talk about DMVs. Most of the top 10 links were old, many by 10 years or more.

With Google I saw 10 non-sponsored links on the first page. Many of them were duplicates of the links in Bing, just in a different order. Link 4 was the same article I found in Bing. Link 5 was a new source that did have Extended Events as the #1 tool for gathering query performance metrics. There was only one Microsoft link, duplicated from Bing, and it didn’t list Extended Events. Just like with Bing, most of the links were old; many of the links from Google were older than the ones from Bing.

DuckDuckGo was just a little better. The 3rd and 4th slots had two different articles talking about Extended Events, with the 3rd slot being the same article from both Bing and Google and the 4th slot being a new one. Three of the slots were Microsoft, and again, no mention of Extended Events. And, once more, many of the links were at a minimum 7 or 8 years old, with some being 13 or 14 years old.

Conclusion

We can have a lot of discussion about the technical aspects of Extended Events. We can also talk about whether or not you should use Extended Events. The simple fact of the matter is, there’s a good chance that people aren’t using Extended Events not because they’re problematic, hard, or full of XML, and not because of Profiler muscle memory or any of the other issues that I, and others, bring up, but because they simply don’t know that they exist.

So, if you are #TeamXE, not only do we have to overcome years of bad information and indifference due to a poor launch (2008 XE just wasn’t good, let’s be honest), but also the fact that, because of the way search engines work, Extended Events may be hidden from many people.
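And if you are wondering what the fuss is about, here is a minimal sketch of an Extended Events session that captures statements running longer than five seconds; the session name, threshold, and file target are my own illustrative choices:

```sql
-- Minimal Extended Events session for long-running statements.
-- Session name, 5-second threshold, and target are illustrative.
CREATE EVENT SESSION [LongRunningQueries] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE (duration > 5000000)  -- duration is in microseconds
)
ADD TARGET package0.event_file (SET filename = N'LongRunningQueries');
GO

ALTER EVENT SESSION [LongRunningQueries] ON SERVER STATE = START;
```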

T-SQL Tuesday #158, Implementing Worst Practices

Invitation from Raul Gonzalez.

One of the most repeated answers to almost any question asked within the SQL Server community is that “everything depends”… Can that also apply to known best practices?

Furthermore, is it possible that some of the commonly agreed “worst practices” have some use case where they can be useful, or suit an edge case?

This month I am asking you to write about those not-so-common practices that you may have implemented at some point and the reasons behind them. I have a few in my pocket that will make more than one of you a bit uncomfortable 😀

T-SQL Tuesday #157 – End of Year Activity

This month’s invitation and recap from Garry Bargsley.

Welcome to the final T-SQL Tuesday for 2022. My ask is: what do you have planned for end-of-year activities in your SQL environment? Do you have annual processes or procedures you run? Do you clean up documentation? Do you just take time off and hope someone else does the work?

Some Examples:
  • Purge log data
  • Archive databases for long-term retention
  • Look for orphaned data/log files on your SQL Servers
  • Do a security analysis for accounts that are no longer needed
  • Add the new year’s dates to dimension tables (see the sketch after this list)
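On that last item, here is a minimal sketch of topping up a date dimension for the coming year; dbo.DimDate and its columns are hypothetical, so adjust to your own schema:

```sql
-- Add next year's dates to a (hypothetical) dbo.DimDate table.
-- Column names and the yyyymmdd integer key are assumptions.
DECLARE @d date = '20230101';

WHILE @d < '20240101'
BEGIN
    INSERT INTO dbo.DimDate (DateKey, FullDate, [Year], [Month], [Day])
    VALUES (CONVERT(int, CONVERT(char(8), @d, 112)),  -- e.g. 20230101
            @d, YEAR(@d), MONTH(@d), DAY(@d));

    SET @d = DATEADD(DAY, 1, @d);
END;
```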

T-SQL Tuesday #152 – It Depends

Invitation and round up from Deborah Melkin.

I came to a realization lately that I have a few opinions about databases. And I’m pretty sure that you do too. After all, I’ve read your blogs, chatted with you, and seen your Twitter rants.

But we’re database professionals. It’s supposed to depend, right?

Except we all have experiences that shape how we approach our work. One minute your coworker asks you a question about doing X. You reply with “It Depends…” leading into a 5-10 minute rant. This may include some or all of the following:

  • Stories starting with “that one time at that client”
  • References to blog posts you read/wrote/should write
  • Commentary on code – the good, the bad, & the ugly
  • Personal theories and philosophies on the topic

All of this is followed by “Thank you for coming to my TED talk” and an “I’m sorry, what was your question again?”

So yeah… this may have been inspired by an actual conversation… or two… or ten. I apologize to my coworkers… again…

So for this month’s T-SQL Tuesday, I want you to give us that rant. Tell us about the experiences, the code, the posts that inspired you, and all the gory details in between. And what is it that makes you so passionate about this topic that “It Depends” gets tossed out the window? Pull out your soapbox and tell us all about it.

T-SQL Tuesday #147 Invitation – Upgrade Strategies

Invitation and wrap-up from Steve Jones

Planning for Upgrades

In my career, production databases haven’t been upgraded very often. In most of my jobs, we’d change versions for new databases, but existing ones often lived on their original version. That’s how I got into a job where I was managing 4 different versions of SQL Server. These days I expect it’s common for many DBAs to have to deal with that many versions, or more.

I do have customers these days that try to upgrade often, and limit the number of versions they work with. I have customers now that are on a mix of 2016-2019 only, some that might be working on 2014-2016 only, and I’ve run into a customer that only has SQL Server 2017. Of course, they have few databases and look to upgrade about every 5 years when mainstream support is running out for their edition.

This month I want you to write about how you look at SQL Server upgrades. A few things you might think about:

  • Why do we wait to upgrade?
  • Strategies for testing an upgrade
  • Smoke tests or other ways to verify the upgrade worked
  • Moving to the cloud to avoid upgrades
  • Using compatibility levels to upgrade an instance but not a database (see the sketch after this list)
  • Checklists of things to use in planning
  • The time it takes to upgrade your environment
  • What you evaluate when deciding whether or not to upgrade
  • Anything else
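On the compatibility-level point, a minimal sketch (the database name is hypothetical): you can upgrade the instance while pinning an individual database to its old behavior, then flip the level later:

```sql
-- Hypothetical database name; 140 corresponds to SQL Server 2017.
ALTER DATABASE SalesDB SET COMPATIBILITY_LEVEL = 140;

-- Review current levels across the instance.
SELECT name, compatibility_level
FROM sys.databases;
```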

I don’t know when SQL Server 2022 will release, but certainly many of us will need to consider in 2023 whether we want to upgrade systems or not. Think about it and write about something that matters to you.

T-SQL Tuesday #142: Using descriptive techniques to build database environments

Invitation from Frank Geisler.

In the old glory days, you usually had to deal with one or two or maybe three SQL Servers. As you all know, those times are over. Through the rise of the cloud, every one of us must deal with more and more systems: not only infrastructure, but also Platform as a Service (PaaS) offerings. The systems themselves are getting more complex through all the new services and technologies that are involved and somehow interconnected. New movements like Azure Arc-enabled Data Services bring a whole new aspect to the table, where you can easily choose whether to run your data workload on your on-premises Kubernetes cluster or in the cloud.

All these systems can easily be built with the Azure Portal, but this is not sustainable. Each time you use the portal, you must remember how to build a certain system and, more importantly, how to apply best practices. Sure, you can build, for example, an Azure SQL Database with an open endpoint into the internet and secure it with firewall settings, but this should be done with much caution because you are exposing your database to the internet. A better approach would be to build an Azure SQL Database that does not have a public endpoint but a private endpoint into an Azure VNet, which hosts the systems that must access the database, or which is connected to a local network via VPN Gateway. As you can imagine, there are a lot of moving parts to get such an environment up and running, and you must remember (or document) each of these. This is very cumbersome work. There must be a better solution, and for sure there is one: scripting.

When you write a script, you do the work once, and whenever the same or a similar situation arises, e.g., deploying an Azure SQL Database according to best practice, you can just pull out your script and there you go. This can even be taken to another level when your script is parameterized, so you just put in the parameters and let the script do the rest. Using this as a mantra, I developed several scripts in PowerShell to build different cloud environments. This has the big advantage that the environment is documented, because you have a script, and versioned as well, because all our scripts are saved in a source control system. The overall approach is called Infrastructure as Code (https://en.wikipedia.org/wiki/Infrastructure_as_code).

But doing imperative scripting in PowerShell also has its shortcomings. The cloud, and the internet in general, is a very uncertain environment. While the script that deploys your environment is running, many things can happen: your internet connection can break down, there could be an error deploying your script for whatever reason, and so many other things you can think of. So you have to build many conditions into your script: if the resource group exists, skip that and just build the Azure SQL Database; if the VNet exists, skip that; check whether all needed subnets exist; and so on. Right? Wrong! Besides the imperative way of telling the Azure Resource Manager what to do, you can also use a declarative approach to build resources in Azure.

This declarative approach is familiar to everyone who has ever written T-SQL code. If you write, for example, a query that selects data, you don’t instruct the database system how to retrieve the data from the underlying file structure. You only tell the system what the data you are looking for should look like: select all the rows of data where the first name is “Frank”. This is the exact same approach that techniques like ARM templates, Bicep templates, or, if we are talking about Kubernetes, YAML scripts take. The scripts are a description of what the target environment should look like. How this target environment is reached is fully up to the underlying system, like the Azure Resource Manager. And there is even more: if you are changing an existing environment, only the parts that changed in the script will be altered in the target environment. Say you have an Azure SQL Database of a certain size and you change the size in your Bicep script. The next time you deploy the script, the Azure SQL Database will be resized without deleting and redeploying it.
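To make the analogy concrete, a minimal T-SQL sketch (the table and columns are hypothetical): the query declares the shape of the result and leaves the access path to the engine, just as a Bicep file declares the target environment and leaves the deployment steps to Azure Resource Manager:

```sql
-- Declarative: describe the rows you want, not how to fetch them.
-- dbo.Person and its columns are hypothetical examples.
SELECT FirstName, LastName
FROM dbo.Person
WHERE FirstName = N'Frank';
```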

The ideal process for working with Infrastructure as Code is that the code is checked in to Azure DevOps and an automated process then deploys the changes to your target environment. To change your environment or add resources, you only have to write the needed changes into your Bicep scripts, check them in, and let Azure do the magic.

My invitation to you for this month’s #tsql2sday is:

Think about deploying SQL components through descriptive methods and, of course, blog about it. It does not matter whether you are using Azure and ARM templates, or Bicep, or Kubernetes and YAML. Just write about it, and build some new cool templates that implement some of your infrastructure and environment best practices. Or write an article about where you have already used descriptive scripts to build environments.

As always, there is a whole lot of material on the internet you can use as a starting point; I summarize a little bit here:

T-SQL Tuesday #124 – Using Query Store or Not Let’s Blog

Invitation and summary from Tracy Boggiano.

Ever since Microsoft introduced Query Store, I’ve been working with it, back to the CTPs in 2016. I started presenting on it because it benefited my company at the time. I’ve heard there are low adoption rates, and from a couple of people about implementation problems, or just not having time to implement it. After 3 years of presenting on it and writing a book about it, I’m curious about the adoption rate of Query Store, but we won’t be writing about that.

For this T-SQL Tuesday, write about your experience adopting Query Store: maybe something unique you have seen, or how you configure your databases, or any customization you’ve done around it, or a story about how it saved the day. Alternatively, if you have not implemented it yet and you are on 2016 or above, blog about why (we know why if you aren’t on 2016). If you are unfortunate enough to be below 2016, write about what in Query Store you are looking forward to the most once you are able to implement it. Basically, anything related to Query Store is in for this T-SQL Tuesday; hopefully everyone has read up on it and knows what it can do.
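If you are writing up your configuration, a minimal sketch of enabling Query Store with a few common options may help frame the post; the database name and option values here are illustrative assumptions, not recommendations:

```sql
-- Hypothetical database name; option values are illustrative only.
ALTER DATABASE SalesDB SET QUERY_STORE = ON;

ALTER DATABASE SalesDB SET QUERY_STORE (
    OPERATION_MODE = READ_WRITE,
    MAX_STORAGE_SIZE_MB = 1024,
    QUERY_CAPTURE_MODE = AUTO,
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30)
);
```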

T-SQL Tuesday #070 – The Enterprise

Invitation and roundup from Jen McCown.

Here is your invitation for T-SQL Tuesday #70, and the topic is:

Strategies for managing an enterprise

We define “enterprise” in a number of ways, but I tend to default to two definitions: “the things I’m in charge of” and “anything I don’t want to do manually”.  In other words, you don’t need a large shop to have yourself an enterprise. Of course, feel free to modify the definition to what works for you.

So. How do you manage an enterprise? Grand strategies? Tips and tricks? Techno hacks? Do tell.

T-SQL Tuesday #062 – HealthySQL

Invitation and Roundup from Robert Pearl.

So, let’s get this blog party started and kick off our international Healthy SQL campaign. Let’s spread the word to anyone and everyone managing a SQL Server database infrastructure about the necessity of performing regular health checks on each SQL Server, repeated often. The purpose here is to get database professionals to ensure that all their SQL Servers are healthy and can pass a health check. It also means that you can prove this (to, heaven forbid, auditors) and back it up with documentation.

If you want to excel in your career as a data professional or DBA, then you need to be concerned about your company’s SQL fitness. Therefore, I am inviting all of you to blog about your T-SQL resolution and describe what it is that you will do this year to make sure your SQL Servers are healthy and fit. Now, it’s OK to ponder Healthy SQL in the abstract, but we’re looking for some technical tips on things a DBA should do to keep your SQL Servers performing well.

It could be something as simple as implementing new monitoring software or a script, updating all your SQL Servers to the latest version or service pack, setting up maintenance and optimization jobs, HA/DR, creating a performance baseline, capturing performance stats (i.e., DMV automation scripts or MDW), a checklist, etc. The sky is the limit, as long as you can contribute something to the SQL community that can be used in the effort to ensure SQL fitness.
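As one example of the kind of simple health-check script that qualifies, here is a sketch that flags databases without a recent full backup; the seven-day threshold is an arbitrary assumption:

```sql
-- Flag databases whose latest full backup is over 7 days old,
-- or that have never been backed up. The threshold is an assumption.
SELECT d.name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
      AND b.type = 'D'  -- full backups only
WHERE d.name <> N'tempdb'
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(DAY, -7, GETDATE());
```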