We had a 4 TB table that I converted to a partitioned table. The conversion took around 30 hours.
Partitioning was laid out over 16 filegroups, but the example below uses 4 filegroups.
We used a computed column as the partition key.
=================================================
Add filegroups
=================================================
ALTER DATABASE DBDB ADD FILEGROUP COMMON1
ALTER DATABASE DBDB ADD FILEGROUP COMMON2
ALTER DATABASE DBDB ADD FILEGROUP COMMON3
ALTER DATABASE DBDB ADD FILEGROUP COMMON4
=================================================
Add Files to filegroups
=================================================
-- Expected total size: 9 GB
-- Note: ADD FILE ... TO FILEGROUP targets a single filegroup,
-- so each filegroup needs its own statement. Secondary files use .ndf.
ALTER DATABASE DBDB
ADD FILE
( NAME = COMMON1, FILENAME = 'F:\Data1\SQL\DBDB\common1.ndf', SIZE = 1GB, FILEGROWTH = 51200KB )
TO FILEGROUP COMMON1
ALTER DATABASE DBDB
ADD FILE
( NAME = COMMON2, FILENAME = 'F:\Data2\SQL\DBDB\common2.ndf', SIZE = 1GB, FILEGROWTH = 51200KB )
TO FILEGROUP COMMON2
etc...
go
-- Check them with
select fileid, g.groupid, groupname, f.filename from sysfiles f, sysfilegroups g
where f.groupid = g.groupid order by 3,1
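On newer versions, sysfiles and sysfilegroups are deprecated; the same check can be sketched with the current catalog views:

```sql
-- File-to-filegroup mapping via the modern catalog views
-- (log files belong to no filegroup, so the inner join skips them)
SELECT df.file_id, fg.data_space_id, fg.name AS filegroup_name, df.physical_name
FROM sys.database_files df
JOIN sys.filegroups fg ON fg.data_space_id = df.data_space_id
ORDER BY fg.name, df.file_id;
```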
=================================================
Partition the table
=================================================
DROP PARTITION SCHEME DBDBPartitioning;
DROP PARTITION FUNCTION DBDBPartitioning;
CREATE PARTITION FUNCTION DBDBPartitioning (int)
AS RANGE LEFT FOR VALUES (
199999999 ,
299999999 ,
399999999
)
go
CREATE PARTITION SCHEME DBDBPartitioning AS PARTITION DBDBPartitioning TO (
COMMON1 ,
COMMON2 ,
COMMON3 ,
COMMON4 ,
[primary])
go
=================================================
Add the computed key to be used later for partitioning- whatever suits you
=================================================
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET ARITHABORT ON
SET CONCAT_NULL_YIELDS_NULL ON
SET NUMERIC_ROUNDABORT OFF
SET QUOTED_IDENTIFIER ON
-- 3 minutes to run
ALTER TABLE EMPLOYEE
ADD TYPE_DATE_COMPUTED AS
EmployeeID*100000000+year(created_date)*10000+ month(created_date)*100+day(created_date)
PERSISTED not null
go
=================================================
Partition the table AKA Rebuild the Clustered Index
=================================================
-- 29h 41min
-- It uses 120G of log files
CREATE CLUSTERED INDEX [PK_DBDBPartitioning] ON [dbo].[EMPLOYEE]
(
[TYPE_DATE_COMPUTED]
)
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF)
ON [DBDBPartitioning]([TYPE_DATE_COMPUTED])
go
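Once the rebuild completes, the row distribution across partitions can be sanity-checked with the catalog views (a sketch; index_id = 1 assumes the clustered index):

```sql
-- Rows and target filegroup for each partition of the clustered index
SELECT p.partition_number,
       fg.name AS filegroup_name,
       p.rows
FROM sys.partitions p
JOIN sys.indexes i
  ON i.object_id = p.object_id AND i.index_id = p.index_id
JOIN sys.destination_data_spaces dds
  ON dds.partition_scheme_id = i.data_space_id
 AND dds.destination_id = p.partition_number
JOIN sys.filegroups fg
  ON fg.data_space_id = dds.data_space_id
WHERE p.object_id = OBJECT_ID('dbo.EMPLOYEE')
  AND i.index_id = 1
ORDER BY p.partition_number;
```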
=================================================
Align the other indexes
=================================================
Do this for all indexes:
CREATE UNIQUE NONCLUSTERED INDEX [IDX_UNIQUE_EMP]
ON [dbo].[EMPLOYEE]
(
[emp_ID] ASC,
[emp_TYPE_ID] ASC,
[emp_SEQUENCE] ASC,
[TYPE_DATE_COMPUTED] ASC -- this needed to be added
)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
ON [DBDBPartitioning]([TYPE_DATE_COMPUTED])
go
=================================================
Make older filegroups READ ONLY
(if you need to)
=================================================
Unfortunately, no connections are allowed to the database while you do this:
ALTER DATABASE DBDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
ALTER DATABASE DBDB MODIFY FILEGROUP COMMON3 READ_ONLY
ALTER DATABASE DBDB MODIFY FILEGROUP COMMON4 READ_ONLY
ALTER DATABASE DBDB SET MULTI_USER
=================================================
Other partitioning commands I might possibly need
=================================================
-- To see into which partition # value 220161227 will go
SELECT $PARTITION.DBDBPartitioning (220161227)
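Another command this setup will eventually need is a SPLIT when the last range fills up. The sketch below assumes a new filegroup COMMON5 has already been added, and picks 499999999 as the next boundary; neither is from the original:

```sql
-- Tell the scheme which filegroup receives the next partition
ALTER PARTITION SCHEME DBDBPartitioning NEXT USED [COMMON5];
-- Add a new boundary value, creating one more partition
ALTER PARTITION FUNCTION DBDBPartitioning() SPLIT RANGE (499999999);
```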
Thursday, May 26, 2016
Restore read-only filegroup
If you are missing a file in a read-only filegroup, you can take that file offline (which will, of course, take the whole filegroup offline) and continue working with the other filegroups. Any queries over global indexes will work fine.
Note: A filegroup can contain multiple files. This restore applies to any of them, as long as the filegroup is read-only.
- ALTER DATABASE dbdb MODIFY FILE (NAME = 'filename_april2', OFFLINE)
- RESTORE DATABASE dbdb FILEGROUP='april_fg' WITH RECOVERY
Monday, August 26, 2013
Login failed for user 'Admin' with SQL 18456 State 5
A client receives an error when running an MS Access application and querying tables linked via ODBC. The error is:
Connection failed:
SQLState: '01000'
SQL Server Error: 53
SQLState: '08001'
SQL Server Error: 17
[Microsoft][ODBC SQL Server Driver][DBNETLIB]SQL Server does not exist or access denied
What shows up in the SQL log is:
Login failed for user 'Admin'. [CLIENT: xx.xx.xx.xxx]
Error: 18456, Severity: 14, State: 5.
Possible solution:
By default, the Access Jet engine tries to connect to SQL Server with its own username BEFORE using the predefined SQL connection. "Admin" is usually the default Access username. There is a registry entry that switches that default behavior off:
Registry value: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\ODBC\TryJetAuth
Change it from 1 to 0.
Thursday, June 21, 2012
Auditing in SQL Server 2005 - DDL triggers
If you need to implement server/DDL auditing in SQL 2005, you have to either use notifications or DDL triggers. This note is just about DDL triggers.
On a database level, you can audit all database events with a DDL trigger:
CREATE TRIGGER AuditTrigger
ON DATABASE FOR DDL_DATABASE_LEVEL_EVENTS AS...
On a server level, if you attempt to create a trigger for all server events:
CREATE TRIGGER AuditTrigger_Server
ON ALL SERVER FOR DDL_SERVER_LEVEL_EVENTS AS...
you will get the following error:
"DROP_SERVER_ROLE_MEMBER" does not support synchronous trigger registration
That's for two reasons: DDL_DATABASE_EVENTS cannot be used in a server-level DDL trigger, and there is a bug if you use DDL_SERVER_SECURITY_EVENTS.
Workaround for both issues:
CREATE TRIGGER AuditTrigger_Server
ON ALL SERVER FOR DROP_DATABASE, CREATE_DATABASE, ALTER_DATABASE, DDL_LOGIN_EVENTS, DDL_GDR_SERVER_EVENTS, DDL_AUTHORIZATION_SERVER_EVENTS AS...
Future versions:
SQL 2008 and 2012 have a new, easy way to implement auditing functionality using
CREATE SERVER/DATABASE AUDIT [SPECIFICATION]... commands
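For SQL 2005, a complete version of the workaround trigger above might look like the sketch below; the AuditLog table (and logging into DBDB) is an assumption, not from the original:

```sql
-- Assumed audit table; create it in the database that should hold the log
CREATE TABLE DBDB.dbo.AuditLog (
    EventTime DATETIME NOT NULL DEFAULT GETDATE(),
    EventData XML NOT NULL
);
GO
CREATE TRIGGER AuditTrigger_Server
ON ALL SERVER
FOR DROP_DATABASE, CREATE_DATABASE, ALTER_DATABASE,
    DDL_LOGIN_EVENTS, DDL_GDR_SERVER_EVENTS, DDL_AUTHORIZATION_SERVER_EVENTS
AS
BEGIN
    -- EVENTDATA() returns the event details as XML
    INSERT INTO DBDB.dbo.AuditLog (EventData)
    VALUES (EVENTDATA());
END
GO
```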
Saturday, May 5, 2012
To change the default backup folder on SQL cluster
- Use SQL Mgmt Studio 2008.
- Right click on the server and choose Facets.
- Change BackupDirectory to the new value.
- Click OK.
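The same setting can also be changed from T-SQL via the undocumented xp_instance_regwrite procedure; the registry path below is the instance-relative default, and the folder is an assumption:

```sql
-- Writes the BackupDirectory registry value for this instance
EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'BackupDirectory',
    REG_SZ,
    N'X:\SQLBackups';   -- new default backup folder (assumed)
```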
Tuesday, January 19, 2010
Restore Service Master Key
As a part of a DR test today, I recovered the encrypted database (log shipped) and restored the service master key from the production. When restoring the key, the following error showed up:
Msg 15320, Level 16, State 12, Line 1
An error occurred while decrypting master key 'DbName' that was encrypted by the old master key. The FORCE option can be used to ignore this error and continue the operation, but data that cannot be decrypted by the old master key will become unavailable.
Fix
Use the FORCE option. Although the documentation says that you might lose some data, as long as you have the correct service master key from production, that will not happen. Quite the contrary: the correct master key will decrypt the database.
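The restore itself looks like the sketch below; the file path and password are placeholders, not from the original:

```sql
-- Restore the service master key backed up from production
RESTORE SERVICE MASTER KEY
    FROM FILE = 'D:\Keys\prod_service_master_key.smk'   -- assumed path
    DECRYPTION BY PASSWORD = 'password-used-at-backup'  -- assumed
    FORCE;
```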
Tags: SQL Server error 15320, service master key, restore
Tuesday, September 15, 2009
SQL Cluster: Resource dependencies
When installing a new clustered instance, you will probably want to assign multiple disk drives to the SQL instance. After you define all clustered resources and install the instance, it will only see the disk drive where you put your default data files. For any additional disk drives, you have to manually apply the solution below.
Solution:
In a SQL Cluster Administration, open the "SQL Server" resource, click on "Dependencies" and add the disk resource you want to be able to use.
Key words:
"SQL cluster" "Select the folder:" "can't see disk"