When a corporate audit finishes, there’s a rush of tasks to fix the findings. High on a DBA’s agenda is the security policy. An auditor compares findings against the SQL Server security policy, highlighting discrepancies. My normal procedure is to supply a report to the server owners. The owners tend to resist, particularly if the findings suggest security should be tightened. The only way forward is to set up regular internal audits, cleaning up as you go along.
I was dealing with a SQL Server performance issue this week. The Windows Event Viewer returned error messages related to SVC latency. I contacted the Infrastructure team, underlining the seriousness, but received no reply. I make a point of maintaining good relations with infrastructure teams, but am frustrated by their unwillingness to deal with anything outside of their immediate comfort zone.
An application team wants to run a one-off data update. A requirement is to build a NonClustered Index to force an index seek. This one-off index takes about 25 minutes to build on a 100 million row table, but allows the job to finish in 3 hrs rather than 9 hrs. When the job completes, I’ll drop the index.
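A sketch of that pattern, assuming a hypothetical table and filter column (dbo.Orders, CustomerID - neither is from the actual job):

```sql
-- One-off supporting index: the update's WHERE clause is assumed to
-- filter on CustomerID, so the index turns a table scan into a seek.
-- Object names here are illustrative only.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_Tmp
    ON dbo.Orders (CustomerID);

-- ... the application team's one-off data update runs here ...

-- Drop the index once the job completes, so it doesn't slow down
-- normal write activity afterwards.
DROP INDEX IX_Orders_CustomerID_Tmp ON dbo.Orders;
```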
I’m establishing some processes for failed backups. Typically FULL backups are run during the night. An Operations team might investigate if there are tape issues, then fix and rerun. If they think it’s a SQL Server issue, they create a Service ticket and pass it to the DBA team.
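A daily check along these lines can flag databases whose FULL backup is missing or stale, using the backup history that SQL Server keeps in msdb; a minimal sketch (the 24-hour threshold is my assumption, not part of the original process):

```sql
-- Databases with no FULL backup (type 'D') in the last 24 hours.
SELECT d.name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
       ON b.database_name = d.name
      AND b.type = 'D'                 -- 'D' = full database backup
WHERE d.name <> 'tempdb'               -- tempdb is never backed up
GROUP BY d.name
HAVING MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE())
    OR MAX(b.backup_finish_date) IS NULL;
```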
The question is how much authority Operations should be allowed on Production database servers. I think if it’s a controlled, scripted and documented process, then Operations teams should be allowed access. I disagree with non-skilled staff attempting to fix a database server problem; I have seen some classic attempts at fixing create all sorts of problems. Ultimately, it depends on the circumstances, but if it’s a critical situation DBA staff should be called.
Failed logon attempts provide useful clues about other environment problems. For example, an application may be pointing to the incorrect database server. I’ve set up a daily report on the SQL Server error logs to capture these attempts. Working with some app owners, I identified some issues.
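One way to drive such a report is xp_readerrorlog; a minimal sketch that pulls failed-login entries from the current SQL Server error log:

```sql
-- Search the current SQL Server error log for failed logins.
EXEC master.dbo.xp_readerrorlog
     0,               -- 0 = current error log file
     1,               -- 1 = SQL Server log (2 = SQL Agent log)
     N'Login failed'; -- search string
```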
Not for the first time, database compression saved the day. A Prod server experienced critical space issues due to an unexpected ETL job. A 1.5 TB database compressed to 650 GB, admittedly helped by a high proportion of VARCHAR and similar data types, which compress well.
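Before compressing, it’s worth estimating the savings per table; a sketch using a hypothetical object name (dbo.StagingFacts, not the actual table involved):

```sql
-- Estimate how much space PAGE compression would save.
EXEC sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'StagingFacts',
     @index_id         = NULL,    -- NULL = all indexes
     @partition_number = NULL,    -- NULL = all partitions
     @data_compression = N'PAGE';

-- If the estimate looks worthwhile, rebuild with compression.
ALTER TABLE dbo.StagingFacts
    REBUILD WITH (DATA_COMPRESSION = PAGE);
```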