Thursday 30 January 2014

SQL Server DBA Responsibilities

DBA:
        A database administrator (DBA) is the person responsible for designing, implementing, and maintaining the database system, and for establishing policies and procedures pertaining to the management, security, maintenance, and use of the database management system.

SQL Server DBA Responsibilities:

1. Installing or upgrading SQL Server: Every DBA is responsible for installing SQL Server or upgrading it to a newer version. The DBA should know the differences between the SQL Server editions and install the required edition, and should also understand the licensing requirements.

2. Patching the SQL Servers: After installing SQL Server, the DBA must make sure that the instance is properly patched with the correct Service Pack (a quick version check is sketched after this list).

3. Database server health monitoring: One of the prime responsibilities of a DBA is to monitor the database server for smooth operation, for example checking that the processor is not overloaded and that sufficient memory is available.

4. Storage availability: The database is stored on physical disk, and the DBA needs to make sure that enough space is available for database growth (a query for checking file sizes is sketched after this list).

5. Performance tuning & optimization: The DBA needs to tune and optimize the database on a regular basis, for example by identifying the most expensive queries (see the query sketch after this list).

6. Securing the SQL Server database: The data must be secured from unauthorized users; the DBA ensures this by granting the proper permissions only to authorized users.

7. Backups and restoration: If the database gets corrupted or the server goes down, the DBA needs to recover the database with minimum loss as quickly as possible. So the DBA needs to take database backups regularly and, when required, restore them (a basic backup/restore sketch follows this list).

8. Data transfer: In today's heterogeneous environments, data may need to be imported from or exported to different formats, e.g. SQL Server to/from Oracle, Sybase, text files, or .csv files.

9. Disaster Recovery: The DBA also needs to plan and make strategies for the disaster recovery of SQL Server.

10. Deployment of SQL scripts: The DBA deploys SQL code and scripts to the production environment.

11. Maximum uptime of servers: The DBA needs to make sure that the SQL Servers have minimum downtime, with little or no impact on the business.

12. Documentation: Keeping everything documented is essential so that the information is handy for new DBAs or during emergencies.

13. Communication: Since the DBA has to interact with different teams, effective communication is essential.
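
For item 2, a minimal sketch of how the installed version, Service Pack level and edition can be checked (SERVERPROPERTY and @@VERSION are standard; the layout is just an example):

SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,  -- build number
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,    -- e.g. RTM, SP1, SP2
       SERVERPROPERTY('Edition')        AS Edition;
-- The full version string is also available via
SELECT @@VERSION;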
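
For item 4, a sketch of a query that reports every database file's current size in MB (sys.master_files is the standard catalog view; its size column is in 8 KB pages):

SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_file_name,
       physical_name,
       size * 8 / 1024      AS size_mb
FROM sys.master_files
ORDER BY database_name, file_id;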
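
For item 5, one hedged starting point is to list the most CPU-intensive cached queries using the standard DMVs sys.dm_exec_query_stats and sys.dm_exec_sql_text:

SELECT TOP (10)
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.execution_count,
       st.text                     AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;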
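
For item 7, a minimal backup and restore sketch; the database name and the D:\Backup path are placeholders:

-- Take a full backup (INIT overwrites any existing backup in the file)
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backup\AdventureWorks.bak'
WITH INIT;

-- Restore it when required (REPLACE overwrites the existing database)
RESTORE DATABASE AdventureWorks
FROM DISK = 'D:\Backup\AdventureWorks.bak'
WITH REPLACE;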




Wednesday 29 January 2014

How to shrink Tempdb

There may come a time when you want to shrink tempdb because it has become too large. There are a few ways you can do this, and I have listed them below, but please read to the end of the post before deciding which approach to take. There is an important note at the end of the post.
So first, we’ll look at the configuration on my server
SELECT name, size
FROM sys.master_files
WHERE database_id = DB_ID(N'tempdb');

name                 size
-------------------- -----------
tempdev              1280
templog               640

(2 row(s) affected)

Note that the size column lists the size of the file in 8 KB pages. In this instance my “tempdev” file is 10 MB ((1280 * 8) = 10240 KB).
How to shrink tempdb using DBCC SHRINKFILE
The syntax is as follows, and the operation does not require a restart of the SQL Server service.
DBCC SHRINKFILE(logical_filename, size_in_MB)
So I will shrink the file to 5 MB:
DBCC SHRINKFILE(tempdev, 5);



This produces the following output, and I can see that CurrentSize is now 50% smaller than before (again measured in 8 KB pages):
DbId   FileId      CurrentSize MinimumSize UsedPages   EstimatedPages
------ ----------- ----------- ----------- ----------- --------------
2      1           640         288         176         176

(1 row(s) affected)

DBCC execution completed. If DBCC printed error messages, contact your system administrator.
name                 size
-------------------- -----------
tempdev              640
templog              640

(2 row(s) affected)
Don’t try to increase file sizes in tempdb using this command, because you will see an error. In this example, the attempt was to increase the file to 50 MB.
Cannot shrink file '1' in database 'tempdb' to 6400 pages as it only contains 640 pages.
How to shrink tempdb using DBCC SHRINKDATABASE
The syntax is as follows, and it does not require a restart of the SQL Server service. The target percentage is passed as a number, not a quoted string:
DBCC SHRINKDATABASE(tempdb, target_percentage_of_free_space);
So if the data files in tempdb had enough free space, you could shrink tempdb by running this command to leave 10% of free space at the end of the files:
DBCC SHRINKDATABASE(tempdb, 10);

How to shrink tempdb using ALTER DATABASE
You can use the ALTER DATABASE command to resize tempdb. A restart of the SQL Server service will shrink tempdb back to its originally defined size, but you can also change that defined size using ALTER DATABASE.
The following script will resize both the data file and the log file to 100 MB.
USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 100MB);
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 100MB);
GO

How to shrink tempdb using Management Studio
You can also use Management Studio to shrink tempdb by right-clicking the tempdb database and choosing Tasks -> Shrink.
To resize tempdb, right-click the tempdb database, choose Properties -> Files, and set the file sizes there.
Note that with both the ALTER DATABASE and Management Studio methods, you cannot resize a tempdb file to a particular size if the data contained in the file exceeds the size you are trying to shrink it to.
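
Before picking a target size, it can help to check how much of each tempdb file is actually in use. A sketch using the standard sys.dm_db_file_space_usage DMV, with page counts converted to MB:

USE tempdb;
GO
SELECT file_id,
       unallocated_extent_page_count * 8 / 1024 AS free_space_mb,
       (user_object_reserved_page_count
        + internal_object_reserved_page_count
        + version_store_reserved_page_count
        + mixed_extent_page_count) * 8 / 1024   AS used_space_mb
FROM sys.dm_db_file_space_usage;
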
Should you shrink TempDB?

It is documented in a Microsoft article that performing a shrink operation while tempdb is in use can cause consistency errors in the tempdb database, so please read that article carefully and consider whether you can shrink the database by other means, i.e. restarting the SQL Server instance, which will create a brand new copy of tempdb and release the disk space.

Tuesday 28 January 2014

Transaction Log Myths

Myths :

1) A transaction log backup isn’t needed for a point-in-time restore; a full database backup is enough





This myth comes from using the RESTORE command with the STOPAT clause to restore from a full database backup. The STOPAT clause specifies a point in time for the RESTORE LOG command, and it works well when it’s used with a transaction log backup. The fact that it can also be specified with a full database backup makes you believe that transaction log backups are not needed to recover to a specific point in time.

An example of T-SQL code for restoring the AdventureWorks database to December 31st 2013, 10:59 PM (the full backup is restored first, then the log backup is replayed up to the STOPAT time):

-- Restore the full backup first, leaving the database ready to accept log restores
RESTORE DATABASE AdventureWorks FROM DISK = 'D:\AdventureWorks.bak'
WITH NORECOVERY
GO
-- Replay the transaction log backup up to the required point in time
RESTORE LOG AdventureWorks FROM DISK = 'D:\AdventureWorks.bak'
WITH RECOVERY,
STOPAT = 'Dec 31, 2013 10:59:00 PM'
GO

Although a full database backup alone cannot be restored to an arbitrary point in time, SQL Server doesn’t clearly flag the problem; it allows you to use the STOPAT clause even when no transaction log backup is specified.







2) Can SQL Server database work without a transaction log?


No, that is not possible due to the SQL Server design and ACID compliance. ACID stands for atomicity, consistency, isolation, and durability. All transactions must fulfil these characteristics:

  • An atomic transaction is either fully completed, or is not begun at all
  • A transaction enforces consistency in the system state by ensuring that at the end of any transaction the system is in a valid state
  • When a transaction runs in isolation, it appears to be the only action that the system is carrying out at one time
  • A transaction is durable meaning that once it has been successfully completed, all of the changes it made to the system are permanent 










3) Do I need SQL Server transaction log backups?


Yes, transaction log backups are one of the most important resources when it comes to disaster recovery. They are not needed (and not available) only when the Simple recovery model is used – but that comes with data loss exposure. The majority of database administrators use a 15-minute interval, or even less, for high-traffic SQL Server databases. Transaction log backups are also important because, when taken, they mark VLFs in the online log as inactive so they can be reused for writing new transactions.
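
A minimal sketch of a transaction log backup, assuming the database is in the Full recovery model and already has a full backup; the database name and backup path are just placeholders:

BACKUP LOG AdventureWorks
TO DISK = 'D:\Backup\AdventureWorks_log.trn'
WITH INIT;  -- overwrite the backup file; in practice this is usually scheduled, e.g. every 15 minutes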






4) A full or differential database backup clears the online transaction log



Not true, they don’t clear the online transaction log. The inactive parts of the online transaction log are marked for reuse only when a transaction log backup is created.

Full and differential database backups contain only a small amount of transaction log information – just the transactions necessary to recover the database into a consistent state. They are not a backup of the online transaction log, and therefore they do not mark any part of the online transaction log for overwriting.
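
To check what is actually preventing the online log from clearing, the standard log_reuse_wait_desc column in sys.databases can be queried; a value of LOG_BACKUP means the log is waiting for a transaction log backup:

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;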





5) SQL Server transaction log backups are not needed for successful disaster recovery if the full database backup is taken daily 



It depends on how much data you can afford to lose. If you can afford to lose up to 24 hours of data, then you don’t need transaction log backups and you can use the Simple recovery model.

If the data you can afford to lose is measured in minutes and hours, regular transaction log backups are necessary, as the maximum you will lose is the interval between two transaction log backups.








6) The TRUNCATE TABLE and DROP TABLE commands are not logged into the online transaction log





Not true – both commands are logged, just minimally. The exact deleted values are not logged in the online SQL Server transaction log; only the deallocations of the pages that held the truncated records are logged. These pages are marked for reuse in the database data file, and the truncated data is gone for good once new data is written over those pages. This myth is also encouraged by the fact that these commands take very little time to execute – they are almost instantaneous.
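
A small sketch that shows TRUNCATE TABLE really is logged: run inside a transaction it can be rolled back, which would be impossible if nothing were written to the log (the #demo table is hypothetical, used only for this illustration):

CREATE TABLE #demo (id INT);
INSERT INTO #demo VALUES (1), (2), (3);

BEGIN TRANSACTION;
    TRUNCATE TABLE #demo;
    SELECT COUNT(*) AS rows_after_truncate FROM #demo;   -- 0
ROLLBACK TRANSACTION;

SELECT COUNT(*) AS rows_after_rollback FROM #demo;        -- 3, the TRUNCATE was undone
DROP TABLE #demo;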







7) Shrinking the transaction log will make free space in the online transaction log, so I don’t need to take transaction log backups


Not true; the transaction log will simply grow again. The shrink operation is not a good maintenance practice because it doesn’t solve the transaction log size issue permanently. As auto-growth is one of the more expensive SQL Server operations, it should be avoided. The recommended way to keep the size of the online transaction log under control is to create transaction log backups regularly, or to switch to the Simple recovery model if you can tolerate data loss.










8) A transaction log backup will be the same size as the online transaction log itself


Not true; it will usually be smaller. The online transaction log must have enough information to roll back active transactions, so some space is reserved for potential rollbacks. SQL Server does not want to have to expand the online transaction log during a rollback, because if the expansion fails the database can become inconsistent or go into Suspect mode. That’s why the online transaction log keeps some reserved space and is usually bigger than the transaction log backup. Moreover, a transaction log backup contains only the transactions made after the last transaction log backup. If the online transaction log still contains transactions that have already been backed up, they will not be present in the new transaction log backup, so the backup will be smaller by that amount of space.
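
To compare the size of each online transaction log with how much of it is actually in use, the long-standing DBCC SQLPERF(LOGSPACE) command can be run:

DBCC SQLPERF(LOGSPACE);
-- Returns one row per database with Log Size (MB) and Log Space Used (%)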



9) The transaction log won’t grow if the database is in the Simple recovery model


Not true, it can grow. However, that happens only in specific situations – when there is a long-running transaction, or a transaction that makes a large number of changes.

In the Simple recovery model, the online transaction log is cleared automatically at each checkpoint. SQL Server reclaims log space to keep space requirements small – but that doesn’t mean the log can’t grow. The online transaction log must hold enough information to roll back any open transaction, and since all transactions are written into it, a transaction that makes a large number of changes may not fit in the current log, so the log must be expanded.









10) Having multiple online transaction log files will increase performance 



Not true, it will not increase performance. This myth is based on the belief that having multiple online transaction log files will result in the transactions being written to the files in parallel, and therefore in a performance gain. SQL Server can’t operate with more than one online transaction log file at a time, so any kind of parallel I/O is not possible.

Having multiple transaction log files is needed only in situations where the existing transaction log file can’t record any more transactions due to a lack of free space on its disk.


11) Regular log shrinking is a good maintenance practice


FALSE. Log growth is very expensive because the new chunk must be zeroed out. All write activity stops on that database until the zeroing is finished, and if your disk writes are slow or the autogrowth size is big, that pause can be long and users will notice. That’s one reason why you want to avoid growth. If you shrink the log, it will grow again and you are just wasting disk operations on a needless shrink-and-grow-again cycle. You could shrink after some extraordinary event that blew up the log size, but never shrink to a size smaller than the normal operational size for that database.

12) Some other widely spread myths about the transaction log:
  • Full/diff backup will clear the transaction log
FALSE. Only a transaction log backup (in the full and bulk_logged recovery models) or a checkpoint (in the simple recovery model) will trigger the log clearing process. Full and diff backups only defer it until the backup finishes. If you only take full/diff backups in the full recovery model, your log will grow until the disk is full.
  • Log truncate will make it smaller
FALSE. Log truncation just scans the VLFs (virtual log files) and marks those that are no longer needed by anyone as inactive. That does not change the log file size.
  • Log shrink will make it smaller
It depends. DBCC SHRINKFILE on the log can only chop off the part of the log file after the last active VLF. If the last VLF in the file is active, a file shrink will not make the file any smaller (see the VLF inspection sketch after this list).
  • Instant file initialization will speed-up log file allocations (growth, initialization)
FALSE. Instant file initialization only works for data files, not for log files. Log files must be zeroed out because of the recovery process: the zeroing helps recovery after a crash find where the log records really end, when it bumps into zeros instead of the alternating 64/128 parity values.
  • Multiple log files will make performance gains
FALSE. The transaction log file is always written sequentially, in a single thread, so multiple files won’t parallelize log writing. However, you could add an additional log file on another disk if, for example, you run out of space on the current log disk.
  • My database is in simple recovery model, so the log will not grow
FALSE. The transaction log is used in every recovery model, including simple. It will just be cleared more often (on each checkpoint instead of on each log backup) and some operations will be minimally logged. It is important to properly size the transaction log even for databases in the simple recovery model.
  • It is a good guideline to determine the log size as percentage of database size
FALSE. The log size is determined by the maximum amount of log generated between two log clearings. For example, a small database that makes a lot of data changes (e.g. in-place updates) can generate 100x more log than the database size. The log must also fit the biggest transaction, e.g. the biggest table rebuild.
  • TRUNCATE TABLE is not logged
FALSE. Extents (and up to 8 pages in mixed extents, per table/index/partition) are deallocated from the table, and those deallocations are logged. But this generates far less log than a DELETE command.
  • Minimally logged operation can only happen in BULK_LOGGED recovery model
FALSE. Minimally logged operations can also occur in the SIMPLE recovery model.
  • Every change is logged, regardless of recovery model
TRUE, with one exception. The only area that is not logged is the part of the tempdb database called the row version store, which is used for row versioning.
  • Log records are written to disk immediately
FALSE. There is a 60 KB log cache in RAM that log records are written to first, not the disk. That cache is flushed to the transaction log file on disk as soon as the 60 KB fills up, when any session of that database issues a commit (a real commit, not a nested one that just decreases @@TRANCOUNT), and on some other events. All log records flushed together to the disk get the same block offset (an LSN consists of the VLF number, block offset, and log record number). Therefore, a log block cannot be larger than 60 KB (the cache’s size).
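
As referenced above, a sketch of how the VLF layout can be inspected with the long-standing (undocumented) DBCC LOGINFO command:

USE AdventureWorks;   -- any user database
GO
DBCC LOGINFO;
-- Returns one row per VLF in the log file.
-- Status = 2 means the VLF is active; Status = 0 means it is inactive and reusable.
-- DBCC SHRINKFILE can only remove inactive VLFs from the end of the file.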



Friday 24 January 2014

Reset Sa Password in SQL Server 2008, 2008 R2

The steps below help you reset the "sa" password if you have forgotten or lost it, without reinstalling SQL Server.




1. Log in to the server using an Administrator login

2. Open SQL Server Configuration Manager

3. Select the SQL Server service - Properties - Advanced, and add the -m; parameter to the startup parameters of the
SQL Server service, to start SQL Server in single-user mode so the "sa" password can be reset.
Note - Don't put any space after the semicolon. Add the -m; parameter at the very start, so it looks like below:
-m;-dc:\program..........

4. Restart the SQL Server service

5. Open Command Prompt

6. Run SQLCMD.
If you have more than one instance running on the server, say SQL 2008 R2 and SQL 2012, specify the instance name with the -S switch as below.

7. SQLCMD -S bhamare3\Named

After invoking SQLCMD, it should display the 1> prompt, which indicates you are logged in to the server with the admin login.


Possible Errors:

You may receive the below error when you run SQLCMD from the command prompt:
Login failed for domain\user, or the user does not have sysadmin permissions

If you get the above error, stop the SQL Server service, change the built-in service account to Local System, start the service in single-user mode again, and run the steps below:

1> select @@servername
2>go

1> create login [domain\PC3] from windows;
2> go

1> exec sp_addsrvrolemember 'domain\PC3', 'sysadmin';
2> go


If NP (Named Pipes) is not enabled on your system, you may encounter the below error:

C:\Users\bhamare3>sqlcmd
HResult 0x2, Level 16, State 1
Named Pipes Provider: Could not open a connection to SQL Server [2].

Sqlcmd: Error: Microsoft SQL Server Native Client 10.0 : A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.
Sqlcmd: Error: Microsoft SQL Server Native Client 10.0 : Login timeout expired.

Then enable the Named Pipes protocol from SQL Server Configuration Manager and invoke step 7 again.
Issue the below commands to reset the sa password and enable the sa login:

1> select @@servername --> Make sure the instance name is correct
2> go
1> alter login sa with password='Dhiraj@123'
2> go
1> alter login sa enable
2> go
1> exit

Remove the -m; parameter from the SQL Server startup parameters, restart the SQL Server service, and then open SSMS using SQL Server Authentication with the sa login. You can then change the sa password as you like.
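
As a quick check after the restart, the standard sys.server_principals catalog view shows whether the sa login is enabled:

SELECT name, is_disabled
FROM sys.server_principals
WHERE name = 'sa';
-- is_disabled = 0 means the sa login is enabled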