| A community of more than 1,600,000 database professionals and growing |
Pause and Backtrack

One of the main functions for anyone who manages a database is ensuring that they can recover the system in the event of any issues. My view is that restores are the most important skill and task to be performed on a database. Since restores require backups, I'd then rank backups as a 1a in importance. They're a dependency and a necessity to ensure that we can restore data. Having a copy of the data in a transactionally consistent state just feels critically important to me, over everything else.

I saw a new feature from Amazon Aurora for their MySQL-compatible database. You can use Backtrack to rewind your database to a previous point in time. On one hand, that's an amazing feature. Make a mistake, have an error, click a few buttons, and get the database restored back to the minute (or second) before you made the mistake. On the other hand, if you delete a table, do you want to roll all tables back to that point in time?

This seems like an amazing feature, and Amazon does take away some of the hassle of managing backups. You determine how far back you want to go, in hours, up to three days. Depending on the activity in your database, they charge differently. To me, that's interesting, and it makes sense to me as a customer: I do more, they track more, I pay more. It also seems to be a way for Amazon to capture more money by cutting into some of the consumer surplus that exists with flat-fee pricing, which is something many of us prefer.

The way this works is also different from Azure. The Azure point-in-time feature allows you to go back, but you can't restore on top of your existing database. You'd need to restore elsewhere, then play the rename game or move data between databases. That seems inconvenient, but if you've ever had someone restore a local SQL Server backup over a database you needed, you might appreciate the safeguard of not allowing a restore on top of an existing database. The process might be a hassle, but it does help prevent mistakes during a stressful situation.

Which of these do I like? I prefer the Azure one, though I'd like the restores to be more granular than a minute. The reason is that I rarely want to restore over the existing database in a disaster. In most applications I've managed, there are updates to multiple parts of the database. A mistake in one table doesn't necessarily mean that data changed in other tables should be discarded. Even during deployments, when things go wrong, I've often broken just one set of tables, and rolling back the entire database in a restore is painful. Usually I'd prefer to undo what I can and get any missing data from a restored copy of my database.

Perhaps it's just me, but I find the idea of allowing clients, or even many technical people, to easily roll back an entire database after a mistake to be very dangerous. By the time we recognize the mistake, verify data, and notify others, we might have lots of changes in many tables. Abandoning that data for the sake of convenience is unnecessary. I also worry that many people trying this feature won't think through the implications of rolling back an entire database.

If you feel differently, let me know. There are certainly cases where this is helpful, but I think I'd rather have a "restore to a new db and rename both" automated task instead of AWS Backtrack.

Steve Jones from SQLServerCentral.com
Join the debate, and respond to today's editorial on the forums |
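For reference, that "restore to a new database and rename both" pattern might look something like the sketch below on a regular SQL Server instance. This is a minimal, hypothetical example: the database names, logical file names, file paths, backup files, and STOPAT time are all placeholders to adapt to your own environment, not a prescription.

-- Restore the point-in-time copy as a NEW database instead of over the original.
-- All names and paths below are hypothetical placeholders.
RESTORE DATABASE SalesDB_Restored
    FROM DISK = N'D:\Backups\SalesDB_Full.bak'
    WITH MOVE N'SalesDB_Data' TO N'D:\Data\SalesDB_Restored.mdf',
         MOVE N'SalesDB_Log'  TO N'D:\Logs\SalesDB_Restored.ldf',
         NORECOVERY;

RESTORE LOG SalesDB_Restored
    FROM DISK = N'D:\Backups\SalesDB_Log.trn'
    WITH STOPAT = N'2018-05-14T09:41:00',  -- the moment just before the mistake
         RECOVERY;

-- Once the restored copy is verified, rename the original out of the way
-- and give the restored copy the original name.
ALTER DATABASE SalesDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE SalesDB MODIFY NAME = SalesDB_Old;
ALTER DATABASE SalesDB_Old SET MULTI_USER;  -- keep the old copy reachable for salvaging data
ALTER DATABASE SalesDB_Restored MODIFY NAME = SalesDB;

The nice part of this approach is that both copies survive: you can pull the missing rows out of the restored copy while leaving the changes made since the mistake intact in the original.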
| The Voice of the DBA Podcast Listen to the MP3 Audio (4.3MB) podcast or subscribe to the feed at iTunes and Libsyn. The Voice of the DBA podcast features music by Everyday Jones. No relation, but I stumbled onto them and really like the music. |
|
|
| ADVERTISEMENT | NEW SQL Provision: Create, protect, & manage SQL Server database copies for compliant DevOps. Create and manage database copies effortlessly and keep compliance central to the process. With SQL Provision's virtual cloning technology, databases can be created in seconds using just MBs of storage, enabling businesses to move faster. Sensitive data can be anonymized or replaced with realistic data to ensure data is protected as it moves between environments. Download your free trial |
| | The industry standard for comparing and deploying SQL Server database schemas Trusted by 71% of Fortune 100 companies, SQL Compare is the fastest way to compare changes, and create and deploy error-free scripts in minutes. Plus you can easily find and fix errors caused by database differences. Download your free trial |
|
|
|
| Hugo Kornelis from SQLServerCentral.com This level looks in detail at what happens when we update or delete data from a clustered columnstore index, the impact it has on concurrent data access, and how, without careful maintenance, the efficiency of columnstore indexes can degrade over time. More » |
| Additional Articles from Redgate Dynamic Management Views (DMVs) are a significant and valuable addition to the DBA's troubleshooting armory, laying bare previously unavailable information regarding the under-the-covers activity of your database sessions and transactions. More » |
| Additional Articles from SimpleTalk In the final article of this series, Robert Sheldon shows how to move from a relational structure to a graph structure using the Graph Database feature. More » |
| Andrew Pruski from SQLServerCentral Blogs The Azure Container Registry is an online repository for storing Docker images (think the Docker Hub). What’s cool about this is... More » |
| Randolph West from SQLServerCentral Blogs Previously we looked at four built-in functions to get the current date and time in SQL Server and Azure SQL... More » |
|
|
| Today's Question (by Steve Jones): I built this function that takes multiple parameters.

def AwardRibbons(first, second, third, fourth):
    print("Thanks for participating", fourth)
    print("The bronze goes to ", third)
    print("The silver goes to ", second)
    print("The gold goes to ", first)

When I decide to call this, I include parameters, but some I name and some I do not. What is the order that I must follow for the parameters in my function call? |
Think you know the answer? Click here, and find out if you are right. We keep track of your score to give you bragging rights against your peers. This question is worth 1 point in this category: Python. We'd love to give you credit for your own question and answer. To submit a QOTD, simply log in to the Contribution Center. |
|
|
ADVERTISEMENT | Design and configure SQL Server instances and databases in support of mission-critical, high-throughput applications that provide consistent response times in the face of variations in user numbers and query volumes. Learn to configure SQL Server and design your databases to support a given instance and workload. Pick up your copy of this great book at Amazon today. |
|
|
|
|
|
| Yesterday's Question of the Day |
Yesterday's Question (by Junior Galvão - MVP): A page is considered "suspect" when the SQL Server Database Engine encounters one of the following errors when it tries to read a data page. Select the three alternatives that you consider to be correct.

Answer: All the alternatives are correct:
- An 823 error that was caused by a cyclic redundancy check (CRC) issued by the operating system, such as a disk error (certain hardware errors)
- An 824 error, such as a torn page (any logical error)
- A query has to read a page

Explanation: A page is considered "suspect" when the SQL Server Database Engine encounters one of the following errors when it tries to read a data page:
- An 823 error that was caused by a cyclic redundancy check (CRC) issued by the operating system, such as a disk error (certain hardware errors)
- An 824 error, such as a torn page (any logical error)

The page ID of every suspect page is recorded in the suspect_pages table. The Database Engine records any suspect pages encountered during regular processing, such as when:
- A query has to read a page
- A DBCC CHECKDB operation runs
- A backup operation runs

The suspect_pages table is also updated as necessary during a restore operation, a DBCC repair operation, or a drop database operation. It contains one row per page that failed with an 824 error, up to a limit of 1,000 rows. The errors recorded in the event_type column of the suspect_pages table are listed in the referenced documentation.

References: click here »

Discuss this question and answer on the forums
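If you want to see what your own instance has flagged, the suspect_pages table lives in msdb and can be queried directly. A minimal sketch follows; the column list matches the documented table, and interpreting the event_type codes is left to the referenced documentation.

-- A quick look at pages SQL Server has marked as suspect.
-- msdb.dbo.suspect_pages holds at most 1,000 rows.
SELECT  DB_NAME(sp.database_id) AS database_name,
        sp.file_id,
        sp.page_id,
        sp.event_type,        -- code describing why the page was flagged (see the documentation)
        sp.error_count,
        sp.last_update_date
FROM    msdb.dbo.suspect_pages AS sp
ORDER BY sp.last_update_date DESC;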
|
|
| Database Pros Who Need Your Help |
Here's a few of the new posts today on the forums. To see more, visit the forums.

- Cursors - In what situations should i use cursors over joins. I know that joins are much more faster than using cursors,...
- Return 2 lines of text from a row - My SQL query is: SELECT * FROM .. WHERE = 0 AND ='James' This pulls all the correct data...
- Error Processing Cube from SSIS package run as a SQL Job - Hi, I am trying to upgrade a SQL server with Analysis Services to SQL 2017 cu5, 14.0.3023.8, (I tried CU4 before...
- Backup - Hi , Daily database backups are running, but there is no sql job. And backups are saving in physical drive. How to find...
- Upgrade edition of SQL Server 2016 SP1 from Developer to Standard - Hi all Having a strange issue. We have a production server which incorrectly got installed with Developer edition. The upgrade path...
- A better way than multiple subqueries? - I have a table that contains a single record for each of the (several thousand) devices we manage, and a...
- Regex in SQL - Greetings, I am what you will call an accidental DBA. I was a Systems Engineer and now I have to watch...
- Query VCentreDatabase for VM stats - I am looking at many posts to find a script that can query VCDB to get CPU,Memory Utilization. I developed a...
- How to fix this deadlock? - H All, We are seeing a deadlock happening repeatedly in our environment. Using SQL 2014. Was able to get the xml deadlock...
- Cannot insert duplicate key row in object - Hi, I'm on SQL Server 2014 enterprise edition. I'm running into the following error when attempting to run a MERGE statement: "Cannot...
- Restoring database A also tries to Restore database B - Hi Guys, Had a very strange is happen to me, I was required to restore our production db to a specific...
- Why would I get a difference in using a inner join vs intersect? - Hi, I have two tables that I want to see what records are in both tables. So, I used an intersect...
- No alternative but row-by-row processing? - I know how to do this with a loop or cursor, but wondered if there was a brilliant mind out...
- Index Fragmentation showing up blank - Hi All, I'm checking indexes on a table and it seems to show up blank for all indexes in that...
- simple select * from table takes around 15 secons - I have a table with like 115+ columns and it got the data around 76000 records I ran the query, Select...
- updating sql via excel. - 3rd party provider has a requirement = user needs to update sql tables via excel. Possible it seems with some VBA...
- Stored procedure running in loop is taking time to load . Is there any alternate way to run in a single execution ? - Hi All, I have an application where we will be showing the invoice and Plan backup reports for each recipient in...
- Stored Procedure output to Excel - Could use some help on this one: I have a SQL Server 2008 64bit and try to put data in...
- Report design where number of columns is unknown - Hi, I have a query based on pivot and dynamic SQL that results in the number of columns returned being...
- How To Check if Job is Running? - Does anyone know how to write the T-SQL that checks if a job is still running? I need to make...
|
| This email has been sent to [email protected]. To be removed from this list, please click here. If you have any problems leaving the list, please contact the [email protected]. | This newsletter was sent to you because you signed up at SQLServerCentral.com. Feel free to forward this to any colleagues that you think might be interested. If you have received this email from a colleague, you can register to receive it here. | This transmission is ©2018 Redgate Software Ltd, Newnham House, Cambridge Business Park, Cambridge, CB4 0WZ, United Kingdom. All rights reserved. Contact: [email protected] |
|
|