Database Sharding: Part 2
00:00
Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!
00:26
Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor.
Nikita: Hi everyone! In our last episode, we dove into database sharding and Oracle Database Sharding in particular. If you haven’t listened to it yet, I’d suggest you go back and do so before you listen to this episode because it will give you a lot of context.
00:53
Lois: Right, Niki. Today, we will discuss all the 23ai new features related to database sharding. We will cover sharding native replication, directory-based sharding, coordinated backup and restore for sharded databases, and a few more.
Nikita: And we're so happy to have Ron Soltani back on the podcast. If you don't already know him, Ron is a Senior Principal Database & Security Instructor with Oracle University. Hi Ron! Let's talk about sharding native replication, which is Raft-based, meaning it is reliable and fault tolerant, typically providing subsecond, zero data loss replication. Tell us more about it, please.
01:33
Ron: This is completely transparent replication built into Oracle sharding that duplicates data across the different shards. Data is generally organized into chunks, and the chunks are replicated across either three or five different shards, depending on how much fault tolerance is required. This is provided entirely by the Oracle sharded database and does not require any other component like GoldenGate or Data Guard. If you remember when we talked about the architecture, we said that each shard, each database, can have a standby for high availability, whether through Data Guard or through GoldenGate. With sharding native replication, you don't rely on a secondary database. Instead, the shards back each other up by holding replicas, globally managing those replicas, making sure everything is preserved, and handling all of the failure operations. Now, this is a logical, consensus-based replication, where the different components are all aware of each other. They know which component is healthy, and depending on the load and on any failures, the sharded database decides behind the scenes which replica actually serves the data to the client. That can provide subsecond failovers with zero data loss.
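To make the idea of chunks concrete, here is a minimal sketch of a system-managed sharded table of the kind being described. The table name and columns are invented; PARTITION BY CONSISTENT HASH with a tablespace set is the documented syntax under which the database divides rows into chunks that can then be replicated across shards.

    -- Minimal sketch (illustrative names): rows are hashed on cust_id into
    -- chunks, and the chunks are spread across the shards.
    CREATE SHARDED TABLE customers
    ( cust_id   NUMBER NOT NULL,
      cust_name VARCHAR2(100),
      region    VARCHAR2(20),
      CONSTRAINT customers_pk PRIMARY KEY (cust_id)
    )
    PARTITION BY CONSISTENT HASH (cust_id)
    PARTITIONS AUTO
    TABLESPACE SET ts_set_1;

With Raft-based native replication enabled, it is these chunks that are replicated and failed over between shards.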
03:15
Lois: And what are the benefits of this?
Ron: The major benefit of sharding native replication is that it is completely transparent to the application and to any of the structures. You just specify that you want to use this replication and set the replication factor, and the rest is managed by the Oracle sharded database behind the scenes. It supports fast failover with zero data loss, usually subsecond failovers. And depending on the number of replicas, it can even tolerate multiple failures, like two simultaneous server failures.
Workloads are also load-balanced across all of these shards based on where the data and its replicas are located. So it can also give you somewhat better utilization of the hardware and easier load administration.
So generally, it's designed to help you keep your regular SQL-based databases without having to resort to NoSQL environments or other database technologies.
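As a rough sketch of where this gets configured: the replication mode and replication factor are chosen when the sharded database is deployed, not in the application. The GDSCTL session below is illustrative only; in particular, the -repl and -repfactor options are assumptions based on the description above, so check the GDSCTL reference for the exact syntax.

    -- Hypothetical GDSCTL session; option names are assumptions.
    GDSCTL> create shardcatalog -database cathost:1521/catpdb -repl native -repfactor 3
    GDSCTL> add shard -connect shard1host:1521/shpdb1
    GDSCTL> deploy

After deployment, chunk replicas, leader election, and failover are handled by the sharded database itself, as Ron describes.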
04:33
Nikita: So next is directory-based sharding. Can you tell us what directory-based sharding is, Ron?
Ron: Directory-based sharding basically allows the user to define which key values map to which partition, and even to combine several values in one partition. So you get much better control over the location of the data, over which partition and which shard it lives in, and that allows you to set up exactly the configuration you want.
Now, many times we may have a key whose range of values is not large enough for hash partitioning to distribute the data evenly. Sometimes we don't even know what key values are going to arrive in the future, and when they do, you really don't want to have to reorganize all of the data around new hash functions. So directory-based sharding is for cases where the data cannot be managed and distributed well using hash partitioning, or where we need full control over exactly where data lives.
05:36
Lois: Can you give us a practical example of how this works?
Ron: So let's say our company is very small in three different countries. I can combine those three countries into one single shard, and then have three other big countries, each one sitting in its own individual shard. All of this is done through directory-based sharding. And what is good about this is that the directory itself is a table, created behind the scenes and stored in the catalog. It is available to the client, cached with the client, and used for connection mapping and for data access. So it can give you a lot of benefits.
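Here is a sketch of what such a layout could look like in DDL. PARTITION BY DIRECTORY is the 23ai clause for directory-based sharding, but the table, columns, partition names, and tablespaces below are invented, and the exact partition syntax should be verified against the 23ai documentation.

    -- Hypothetical directory-sharded table: three small countries share one
    -- partition, while each big country gets its own.
    CREATE SHARDED TABLE accounts
    ( account_id   NUMBER NOT NULL,
      country_code VARCHAR2(2) NOT NULL,
      balance      NUMBER,
      CONSTRAINT accounts_pk PRIMARY KEY (account_id, country_code)
    )
    PARTITION BY DIRECTORY (country_code)
    ( PARTITION p_small TABLESPACE tbs_1,   -- e.g. the three small countries
      PARTITION p_us    TABLESPACE tbs_2,
      PARTITION p_de    TABLESPACE tbs_3,
      PARTITION p_jp    TABLESPACE tbs_4
    );

The mapping from each country code to its partition lives in the directory table Ron mentions, stored in the catalog and cached by clients.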
06:24
Nikita: Speaking of benefits, what are the key advantages of using directory-based sharding?
Ron: The first benefit is that it allows you to group data together based on whatever values you want, and to decide where across the shards you want to put them. All of that is much better and more easily controlled by us, or by the designers. The second case is when there are not enough distinct key values, so hash-based partitioning would result in an uneven distribution of the data. We may then use the directory for a better distribution of the data, since we understand the data better than a hash function does. There is also the ability to plan for future partitions, depending on how large they are going to be. Maybe you create them in an existing shard and later put them in another shard. That kind of control becomes essential for managing this specific type of data. And if a key value gets too big, say a client is getting too big, you can split it into multiple key values, or combine values, or move data from one location to another. All of these operations are maintained automatically behind the scenes: we provide the changes, and the directory and the sharded database manage all of the data structure and movement using the new functionality.
And finally, large chunks of data can be moved from one location to another. This is part of the automatic chunk data move, but used within directory-based sharding it gives us control over how we move and manage the data as the load or the size of the data changes.
08:50
Lois: Ron, what is the purpose of the coordinated backup and restore system in Oracle Database Sharding?
Ron: So, basically, when we talk about coordinated backup and restore, remember that in a sharded database I have several different databases. Each database is a shard, and when you take a backup, each database creates its own backup.
So to have consistent data across all of the shards for the whole schema, it is extremely important for these databases to be coordinated when the backup is taken and when the restore is done. That way, consistency of the data is maintained across all of the shards.
09:28
Nikita: So, how does this coordination actually happen?
Ron: You don't submit this through RMAN directly. You submit it through the global management tool that is used for the sharded database, and it is that tool that actually submits your request to each database while coordinating when the actual backup is taken, at what SCN. That SCN coordination across all of the shards is maintained for the backup, so you can create a consistent backup, or restore to a consistent point in time, across the whole sharded database. Now, this system was enhanced in 23ai to support multiple destinations.
So you can now send your backup to an object store, to ZDLRA, or to Amazon S3. Multiple locations can now be defined as targets for these backups. You can also use multiple recovery catalogs. Let's say I have data located in different countries, and we have a requirement that data for each country must stay in that country. Then I also need a separate recovery catalog to maintain that partition.
So now I can use multiple catalogs and define which catalog maintains which partition, to satisfy those kinds of requirements, or any data administration requirement, when it comes to backup and recovery. In addition, you can now specify different types of encryption, so a different encryption algorithm can be identified and set up for each of the databases that you are backing up.
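As an illustrative sketch, the whole workflow is driven centrally from GDSCTL. CONFIG BACKUP and RUN BACKUP are the GDSCTL commands for sharded database backups, but the options shown here are placeholders rather than verified parameters, so check the GDSCTL reference before relying on them.

    -- Sketch only; option names are assumptions.
    GDSCTL> config backup -rccatalog rcat_connect ...   -- destinations, catalogs, encryption
    GDSCTL> run backup -type full                       -- SCN-coordinated across every shard
    GDSCTL> list backup                                 -- inspect backups for the sharded database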
So these advancements now allow you to manage coordinated backup and restore with all of the specific configuration that may be required by how the data is organized. As I mentioned, the encryption can also be done with different algorithms for the different components you define.
Finally, there is much better error handling and reporting available through this global system. Since everything has been synchronized, you get much better information for diagnosing any issues.
12:15
Want to get the inside scoop on Oracle University? Head over to the Oracle University Learning Community. Attend exclusive events. Read up on the latest news. Get first-hand access to new products. Read the OU Learning Blog. Participate in challenges. And stay up-to-date with upcoming certification opportunities. Visit www.mylearn.oracle.com to get started.
12:41
Nikita: Welcome back! Continuing with the updates… next up is the automatic bulk data move on sharding keys. Ron, can you explain how this works and why it's significant?
Ron: And by the way, this doesn't have to be bulk data. It could be just an individual row, or it could be bulk data, a huge amount of data that is going to be moved.
Now, in the past, when the shard key of an existing record was going to be updated, we basically had to remove that row from the table, moving it to a temporary table or to another location. Essentially, you were deleting the row, changing the value, and reinserting the row so that it would land in the proper location.
That causes a lot of work and requires writing specific code to manage those situations. And of course, if there is a lot of data, you are now moving that bulk data twice.
13:45
Lois: Yeah… you’re moving it to one location and then moving it back in. That’s a lot of double work, not to mention that it all needs to be managed manually, right? So, how has this process been improved?
Ron: So now, basically, you can just update the value of the partition key, and the data will automatically move to the new location. So this gives you complete flexibility over the shard key values.
This is also completely transparent and, again, completely managed behind the scenes. All you do is specify what is going to change, and the database takes care of the actual data location and movement.
14:31
Lois: And what are some of the specific benefits of this feature?
Ron: Basically, it allows you to be flexible, to update the shard key without having to worry about which location a value has to live in, whether you have to delete and reinsert it, and all of those different operations.
And this is done automatically by Oracle Database, but it does require you to enable row movement at the table level. So for tables that are expected to have partition key updates at unpredictable times, for example directly by clients, we may need to enable row movement at the table level and leave it enabled. That does carry a tiny bit of overhead, because the database maintains some metadata about row locations behind the scenes while it is enabled.
But in cases where I know when the shard key is going to be changed, let's say through a stored procedure or something written for that purpose, then when the shard key is updated, the data automatically moves to the new location based on that shard key operation. So we don't need to move the data manually in and out or between locations.
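In SQL terms, this is just an update of the sharding key on a table with row movement enabled. ENABLE ROW MOVEMENT is the standard clause Ron refers to; the table and values are carried over from the earlier illustrative sketch.

    -- One-time setup: allow rows to physically relocate when their key changes.
    ALTER TABLE accounts ENABLE ROW MOVEMENT;

    -- Updating the sharding key now moves the row to the correct partition
    -- (and shard, if necessary) automatically, with no delete-and-reinsert.
    UPDATE accounts
       SET country_code = 'DE'
     WHERE account_id = 1001;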
16:03
Nikita: In our final segment, I want to bring up the update on splitting and moving a partition set, or basically subpartitioning tables and then being able to move all of the data associated with that in a bulk data move to a new location. Ron, can you explain how this process works?
Ron: This gives us a lot of flexibility for data management based on future requirements, size of the data, key changes, or key management requirements.
So generally, when we use composite sharding, remember, this is a combination of user-defined partitioning plus system-managed partitioning put together. That gives us a bit more control over where the data is placed and how it is distributed evenly across the shards.
So sometimes, based on this type of configuration, we may actually need to split a partitionset, and that can cause shard key values to be assigned to a new shardspace under the reconfigured partitioning. That data movement needs to be managed automatically. So when you split a partition or a partitionset, the data can, based on your configuration and your specification, automatically move to the new location between those shardspaces.
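What follows is a hypothetical sketch of the kind of statement involved. The SPLIT PARTITIONSET syntax is an assumption modeled on ordinary partition-maintenance DDL, and the table and names are invented, so verify the exact form against the 23ai documentation.

    -- Hypothetical: orders is assumed to be a composite-sharded table.
    -- Splitting a partitionset reassigns some shard key values to a new
    -- shardspace; the associated bulk data move happens automatically.
    ALTER TABLE orders
      SPLIT PARTITIONSET ps_europe INTO
      ( PARTITIONSET ps_west TABLESPACE SET ts_west,
        PARTITIONSET ps_east TABLESPACE SET ts_east );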
17:32
Lois: What are some of the key advantages of this for clients?
Ron: This provides a huge benefit to clients because it gives them the flexibility to better manage their configuration, expanding both the configuration of the servers and the structures, for better management of the data and the load. The data is completely online during all of this data movement. Since it is being done behind the scenes by the database, it does not impact the availability of the data for anyone who is actually using it.
And the data is generally moved using transportable tablespaces, in big, bulk chunks. So it is almost like copying portions of the files. If you remember, in Oracle Database we could take a backup of big files as image copies in pieces. This is similar: chunks of data can be moved and then transported where possible, depending on how the data for those particular partitions is organized.
18:48
Lois: So, what does it look like in practice?
Ron: Well, clients can now go ahead and rearrange their data structures based on adjustments to the partitioning that already exists within the sharded database. The bulk data move then triggers automatically once the customer executes the statement to restructure the partitioning. And all the while, clients are still accessing the data; all of the data operations are completely maintained behind the scenes.
19:28
Nikita: Thank you for joining us today, Ron. If you want to learn more about what we discussed today, visit mylearn.oracle.com and search for the Oracle Database 23ai New Features for Administrators course. Join us next week for a discussion on some more Oracle Database 23ai new features. Until then, this is Nikita Abraham…
Lois: And Lois Houston signing off!
19:51 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.