Hi everyone, thank you for coming. My name is Shlomi Noach, and this session is about schemadiff. This is a Vitess library for in-memory schema analysis, validation, normalization, diffing and manipulation. That's a mouthful; we're going to mostly concentrate on the idea of the diff: given two tables or two schemas, get the semantic diff that transforms one schema into the other. This could be the tables created, ALTER TABLE, DROP VIEW, whatever it takes to get from here to there. What I want to do is walk through what it takes for schemadiff to be able to do that, and mostly concentrate on the idea of the diff and consider a few things that we don't normally think about when we say "diff": what does it take to get a diff, what are the implications of the diff, is it applicable, et cetera. So we'll go a little bit beyond the diff. And mostly I want to show you how you can take schemadiff back home, back to work, to the office, and do very interesting things with it, either directly programmatically or through the command line and tooling. All right, my name is Shlomi, Vitess maintainer. I've authored a bunch of open source tools, and specifically I've been around the schema management zone for like 15 years now, in different capacities and different aspects.
I'm with PlanetScale. PlanetScale offers a massive scale database as a service, and one of the features of PlanetScale is the idea of branching. You have a production branch; you can create your local branch where you are isolated from production; you can do whatever you want, change your schema, whatever; and at the end, issue a deploy request, which is like a GitHub pull request, right? This is where we look at the production schema, we look at your schema, and we say: okay, this is the diff, this is what it takes to move from here to there. And we've done many, many tens of thousands of schema deployments; our customers are doing that many deployments in production. So this is like the main driver and validator for schemadiff. But PlanetScale is built on top of Vitess, and schemadiff is a Vitess library; PlanetScale is just a user of this library. Vitess is a CNCF graduated project. It's hard to describe in one sentence, but: a massive scale sharding and clustering solution on top of MySQL. Vitess also uses the schemadiff library internally in two different places. But we won't talk about schemadiff just in the Vitess context. We develop schemadiff such that, even though it lives within Vitess, you can use it on its own. And we make sure to keep a minimal set of dependencies in schemadiff: if you import it, you don't need to import the entire gigantic Vitess codebase; you'll get a much more minimal dependency set.
All right. The most important thing to say about schemadiff is that it was designed to be fully in-memory, fully Golang, with no external dependencies; specifically, no MySQL. We should be able to somehow diff two tables or two schemas given as text, maybe imported from a dump, but by the time schemadiff sees them, it doesn't know anything about a MySQL server. It just knows it has text, or a set of statements. This affects everything we're going to talk about, because we lose a lot of information, right? The traditional way is to go to information_schema: if you want to analyze something — what are the keys, what are the columns — you go to the TABLES or COLUMNS tables in information_schema and gather the data there. Here, we're going to need a different approach, and that different approach is going to be very powerful. As we'll see later, we can get superpowers with it. All right, I will show a lot of code. Mostly you don't need to parse it, you don't need to really know it, no need to worry about it; I just want it to be available for you on the slides when you get back home. schemadiff works with MySQL 8.0 and 8.4. We don't support other flavors, and we're not looking to support them. All right. So given two texts — two CREATE TABLEs, or two schemas — we want to be able to diff them. It first begins with being able to understand the text. The first step for that is the parser.
We need to be able to parse the text, and thankfully Vitess — no surprise — understands not only the MySQL protocol, as mentioned before, but also the MySQL syntax, and can parse almost any MySQL-compliant text. The sqlparser library is one of the most fundamental libraries in Vitess, and it's a classic AST — abstract syntax tree — parser. Given some text, I expect sqlparser to return a formal representation of whatever query is expressed in that text. For example, if my query was a CREATE TABLE, then I expect the parser to return an object that is the structure of a CREATE TABLE. It represents things like: temporary or not temporary, IF NOT EXISTS or not — a more formal representation of the syntax. Specifically, inside here we have the table spec. The table spec is yet another construct where you can see all the columns, the indexes, the constraints, the table options, et cetera. If you look at a column definition, it has a name and it has a type. The type is yet another structure where you actually get the length, the character set, all the properties of the column. So as you can see, this is the abstract syntax: you begin with some root, which is the CREATE TABLE in this example, and it goes down and down and drills into the particulars. That's one part of sqlparser.
The second part is formatting, or exporting: I can take any statement, or any such construct or sub-construct, and export it back into a string. So far so good — but parsing is not nearly enough to really understand the schema. The next part is normalization. If you take a look at these two tables, they look different, but they are identical. A primary key can be defined in different ways; the user gives the text, right, I can't control what the user gives, and they can define it like this or like that. Upper case or lower case doesn't matter; the (11) in int(11) doesn't matter. A column with DEFAULT NULL is the same as one with no default defined at all. In MySQL 8.0 and above, this is the default character set, so if it's the default, I can either define it or not. It's all equal, and I need a way to decide what it is that I'm comparing. So the second part of schemadiff is normalization: schemadiff reads the query, normalizes it, strips down stuff like the examples I mentioned before, and can export it back, with the parser, into the most minimalistic representation it can. That's still not nearly enough, because next comes validation. The parser can give me syntax errors, but take a look at this table. It is perfectly valid syntax-wise, but it's completely meaningless. It's just not a realistic table: two different primary keys; a key over a non-existent column.
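The kind of reduction the normalization step performs can be sketched at the string level. This is a toy, assumption-laden version (schemadiff itself normalizes the parsed AST, not raw strings), covering the three examples above: display width, implicit NULL default, and the default character set:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// normalizeColumn reduces a column definition string to a minimal canonical
// form. A string-level sketch of the idea only; the real normalization works
// on AST nodes.
func normalizeColumn(def string) string {
	s := strings.ToLower(strings.TrimSpace(def))
	// int(11): the display width carries no meaning, drop it.
	s = regexp.MustCompile(`\bint\(\d+\)`).ReplaceAllString(s, "int")
	// DEFAULT NULL on a nullable column is the implicit default, drop it.
	s = strings.ReplaceAll(s, " default null", "")
	// utf8mb4 is the server default character set in MySQL 8.0+, drop it.
	s = strings.ReplaceAll(s, " character set utf8mb4", "")
	// Collapse whitespace into the most minimalistic representation.
	return strings.Join(strings.Fields(s), " ")
}

func main() {
	a := normalizeColumn("`i` INT(11) DEFAULT NULL")
	b := normalizeColumn("`i` int")
	fmt.Println(a == b) // the two definitions are identical after normalization
}
```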
A foreign key constraint that has a different number of referencing and referenced columns. Right, this is not a valid table. So schemadiff needs to go the extra mile and validate the design of the table, or of the schema — essentially doing what MySQL does, but in memory, programmatically, in Go, in Vitess. Which it does. So we validate existence of columns and foreign keys and all that it takes, not only at the table scope but also at the entire schema scope. You may be aware that this sequence of steps is valid in MySQL: you can drop a table that a view depends on, and you will only find out that the schema is broken when you try to select from the view, or export and import it. In a similar way, although it takes more effort, you can break foreign key dependencies in MySQL. So schemadiff is also about validating cross-table and cross-view dependencies: checking that foreign key targets do exist, that there is an index covering the referenced columns in the referenced table, that the column types match between the tables, et cetera. It's really a lot of heavyweight work to re-apply the same checks MySQL would do for you. Of course, if you go the information_schema way, it's easier: you do SHOW CREATE TABLE, and if MySQL was happy to let the table exist, you don't need to do any of these things. So that way is easier and this way is more difficult — and still it pays off.
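One of those re-applied checks can be sketched very simply: does every key cover columns that actually exist in the table? The model below is a hypothetical toy, not schemadiff's real types, but it is the same kind of in-memory validation described above:

```go
package main

import "fmt"

// Table is a toy model: column names plus keys mapping a key name to the
// columns it covers ("PRIMARY" for the primary key). Illustrative only.
type Table struct {
	Columns []string
	Keys    map[string][]string
}

// validate re-applies, in memory, a check MySQL itself would perform:
// every key must cover existing columns.
func validate(t Table) error {
	cols := map[string]bool{}
	for _, c := range t.Columns {
		cols[c] = true
	}
	for name, keyCols := range t.Keys {
		for _, c := range keyCols {
			if !cols[c] {
				return fmt.Errorf("key %q covers non-existent column %q", name, c)
			}
		}
	}
	return nil
}

func main() {
	// Syntactically fine, semantically meaningless: key over a missing column.
	bad := Table{
		Columns: []string{"id"},
		Keys:    map[string][]string{"PRIMARY": {"id"}, "name_idx": {"name"}},
	}
	fmt.Println(validate(bad)) // reports the non-existent column
}
```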
All right. Now that we are able to parse, normalize and validate, I'm finally able to take a text and, from that text, get a CREATE statement or a sequence of CREATE statements, which I can iterate over and export or do whatever I like with. Which finally brings us to the diff — except wait, there's one slide before that. If you're not doing Golang and you're asking yourself, "I can't use this because I'm doing Python, Java, whatever": we do have a thin wrapper, an open source Apache 2.0 command line tool called schemadiff, which gives you some of the bare essentials of the schemadiff library. You can run it from the command line. For example, this is how I load a schema file; you can load from a database, et cetera, and I'll show a couple more examples later on. All right, so now we're good to discuss the diff. I showed you earlier the table spec, where we have the lists of columns, indexes, constraints, et cetera. You can kind of begin to understand how this works programmatically. If I have two of these, and one table has a list of six columns and the other has a list of seven columns — well, there's probably an ADD COLUMN embedded in this diff. Of course, it's more nuanced: you need to check the columns, which one is being added; you need to check the names, et cetera. There are really a lot of nuances. The order of columns really matters when you're diffing two tables, because if the order is different, we need to do ALTER TABLE ... MODIFY COLUMN ... AFTER this one, or FIRST, et cetera. But the order of indexes doesn't matter; it has zero impact on your schema design.
Similarly, for constraints, the order doesn't matter. However, when you compare names: the names of columns are super important, and the same for indexes, because you might be using FORCE INDEX with the index name — but again, not for constraints. So it's really, really nuanced. There's a myriad of checks that we do just to make sure we're doing the right thing and not producing unnecessary changes. So far, so good. Generally speaking, everything I'm going to say today is nuanced — has nuances — except for the things that don't. So feel free to heckle me later on and I'll explain where the nuances are. All right. We're obviously not going to look into the entire code that does the diff, but we get the idea. So with schemadiff, I'm able to compare, for example, two tables or entire schemas, and get a diff — or a list of diffs that I can then range over, iterate and print out to standard output. This will be the diff. So far, so good. Same with the command line tool: you can do the same thing, just wrapped up nicely. All right. The next thing is that the diff itself is a formal entity. Just like the parsed queries are formal entities, so is the diff. The diff that I get back tells me: what's the entity, what's the table name — or view name, whatever it is; what does the "from" look like and what does the "to" look like; what does the sqlparser statement that this diff describes look like; what does the diff look like as text. So there's a gazillion building blocks to the diff itself.
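The six-versus-seven-columns intuition above can be sketched as a minimal diff over two ordered column lists. This is a toy, assumed model — real diffing must also handle renames, type changes and column order (MODIFY ... AFTER / FIRST), as the talk notes:

```go
package main

import "fmt"

// diffColumns compares two column lists by name and emits the ALTER TABLE
// clauses needed to get from "from" to "to". A sketch of the idea only.
func diffColumns(from, to []string) []string {
	fromSet := map[string]bool{}
	for _, c := range from {
		fromSet[c] = true
	}
	toSet := map[string]bool{}
	for _, c := range to {
		toSet[c] = true
	}
	var clauses []string
	// Columns present only in "from" must be dropped...
	for _, c := range from {
		if !toSet[c] {
			clauses = append(clauses, "DROP COLUMN "+c)
		}
	}
	// ...and columns present only in "to" must be added.
	for _, c := range to {
		if !fromSet[c] {
			clauses = append(clauses, "ADD COLUMN "+c)
		}
	}
	return clauses
}

func main() {
	// One list has an extra column: there's an ADD COLUMN embedded in the diff.
	fmt.Println(diffColumns([]string{"id", "name"}, []string{"id", "name", "email"}))
}
```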
Before we move on — now that we've reached the diff, I want to talk a lot about what's beyond the diff. But before that, I want to show you a few cool, really interesting things you can do, and that we do do, with schemadiff. To begin with, I want to mention the sqlparser Walk function. Walk is a visitor design pattern function: it visits every element in your tree, and for every element it allows you to do something. This is a powerful tool for linting your schema, linting your changes, checking for stuff that you disapprove of, manipulating, cloning. I'll give you an example. I diff two schemas. There's a list of changes. These could be ALTER TABLE, CREATE VIEW, DROP VIEW, whatever. I go one by one, walking the changes. For every change I check: is this change a DROP TABLE, or is there a DROP TABLE within the change? Is this change a DROP COLUMN, or does it contain a DROP COLUMN? If yes, this is a bit of a dangerous situation — this is a risky diff, a diff that has danger inside. We actually do that in PlanetScale: when a user drops a table or drops a column, that's fine, they're allowed to drop a table or a column and file the deploy request. But we put up big red letters: hey, you're good, we don't want to alarm you, but you might lose data as a result of this deployment. So this is one way of linting the diff. You can think of other ways of linting schemas: make sure every table has a primary key, or check whether there's a foreign key.
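The walk-and-flag pattern described above can be sketched with a stand-in tree. Node, Walk and the kind names here are illustrative assumptions in the spirit of sqlparser's Walk, not its actual signatures:

```go
package main

import "fmt"

// Node is a toy statement-tree node; Kind stands in for the concrete AST type.
type Node struct {
	Kind     string // e.g. "AlterTable", "DropTable", "DropColumn"
	Children []*Node
}

// Walk visits every node in the tree (visitor pattern) and calls fn on each.
func Walk(n *Node, fn func(*Node)) {
	if n == nil {
		return
	}
	fn(n)
	for _, c := range n.Children {
		Walk(c, fn)
	}
}

// isRisky lints one change: is it, or does it contain, a DROP TABLE or
// DROP COLUMN anywhere inside?
func isRisky(change *Node) bool {
	risky := false
	Walk(change, func(n *Node) {
		if n.Kind == "DropTable" || n.Kind == "DropColumn" {
			risky = true
		}
	})
	return risky
}

func main() {
	// An ALTER TABLE that embeds a DROP COLUMN: a diff with danger inside.
	alter := &Node{Kind: "AlterTable", Children: []*Node{{Kind: "DropColumn"}}}
	fmt.Println(isRisky(alter))
}
```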
Do you allow foreign keys in your schemas, yes or no? Do you allow cascading foreign keys? You can really drill down to the details, and it's easy to do that with schemadiff programmatically. Granted, these examples are also easily achievable with a simple regular expression or a substring search. That's fine, but there are more interesting ones. Just the other day, lefred mentioned, at the Belgian pre-FOSDEM MySQL Days — Jean-François was also in the discussion — the idea of linting an index: that the last part of a secondary index shouldn't be a prefix of the columns indexed by the primary key. These are not crazy things, but they require a lot more semantics to handle, and they are easily achievable with schemadiff; I could write that in a few minutes, plus testing of course, but it's kind of easy to do here. So far so good? All right. How about the diff itself — why do we need the diff? Three very real examples. One is, of course, deploying to production. Say I'm working at GitHub; I have designed some changes to my tables; I want to ship that to production. My code is in git. I want to somehow make production look like my git code. So we need to be able to compare production with my git code and then apply those changes. Classic. Second, some environments are not easily deployable, like your testing environment, staging environment, et cetera. Maybe you don't have the automated tooling; you just want to know that your testing environment really looks like production right now. It's just for you.
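As an example of a lint that needs real semantics rather than a regex, here is a related (not identical) rule to the index lint mentioned above: flagging a secondary index whose column list is a prefix of another index's columns, which makes it redundant. A sketch under assumed toy types:

```go
package main

import "fmt"

type Index struct {
	Name    string
	Columns []string
}

// redundantIndexes flags any index whose column list is a prefix of another
// index's column list — a semantic lint no substring search can express.
// (A related example, not the exact rule discussed in the talk.)
func redundantIndexes(indexes []Index) []string {
	var out []string
	for _, a := range indexes {
		for _, b := range indexes {
			if a.Name == b.Name || len(a.Columns) > len(b.Columns) {
				continue
			}
			prefix := true
			for i, c := range a.Columns {
				if b.Columns[i] != c {
					prefix = false
					break
				}
			}
			if prefix {
				out = append(out, a.Name+" is redundant with "+b.Name)
			}
		}
	}
	return out
}

func main() {
	// idx_a covers (a); idx_ab covers (a, b): idx_a adds nothing.
	fmt.Println(redundantIndexes([]Index{
		{Name: "idx_a", Columns: []string{"a"}},
		{Name: "idx_ab", Columns: []string{"a", "b"}},
	}))
}
```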
You pull the schema from production, you look at your testing deployment, you do the diff. Or, the third option: a tool like orchestrator has a backend database, and it deploys its own backend database — you don't need to deploy it on its behalf. You just run the next version of orchestrator, and it deploys its own backend tables. The same actually happens with Vitess. Vitess not only lets users access massively sharded databases with schemas of their own design; it also runs, on the MySQL servers it manages, some internal management tables in a specialized sidecar schema, the `_vt` schema, where it persists stuff like workflows, schema migrations, even heartbeats. So as Vitess starts up, it reads its own schema definition from code, it looks at whatever it finds or does not find in the backend tables, and if there are necessary changes, it applies them upon startup, before opening connections. Makes sense? Cool. Let's go beyond getting the diff. The next step is that schemadiff lets you apply the changes in memory. We have a schema; we parse it, we validate it; we got a diff from somewhere, maybe by comparing to another schema; and then schemadiff lets you apply that diff onto your schema, effectively running the schema change in memory. It's not an ALTER on an existing table, because it's all in memory. Why would we need to do that? This is very powerful, and it powers at least two very important things that we do — one in Vitess, one in PlanetScale.
I'll talk about just one of them — the one that's useful for everyone, let's put it like that. Getting the list of statements from a diff is not enough. If you compare two schemas and you're told: create that table, drop that table, modify that view — that is not enough. There is not enough information in that output, because order matters; or rather, a valid sequence matters. If you create a table and create a view that depends on that table, of course you need to first create the table and then create the view. If you want to drop the two, and you want to do it right, you should first drop the view, then drop the table. With foreign keys, there is a myriad of other operations that you absolutely have to do in the right order, or else some sequences will fail. So now we want to find a valid sequence. Not only do tables and views and foreign keys create dependencies in your schema — the changes to those tables and views in the diff, these changes themselves, form a dependency tree, so we need to figure out that tree. A naive solution would be to try all the permutations of all the changes, right? If you were on your MySQL server, you would just try to run your queries: oh, that one doesn't run; let me run the second one; okay, this one works; all right, retry the first one again. But with schemadiff you can do the same thing in memory, because, as we saw, schemadiff is able to validate a schema, and it knows how to apply a change. So imagine I take a schema in schemadiff and apply this change — does this work? Yes, good. Does this work?
Yes, good. Does this work? No — because maybe it depends on another table, I don't know. We can revert back to the previous schema in memory and try the next one, and the next one. Worst case scenario — and this is really terrible — it's the permutations of, say, seven operations: the factorial of seven, that's a lot of operations. schemadiff is smarter than that. It can identify which diffs could potentially be conflicting and which could not, so it can reduce that factorial of seven into, say, factorial of three times factorial of two, times one, times one — a lot smaller. So schemadiff is able to try the different permutations and tell you: okay, of these changes, here is a specific order in which you should apply them — or give you an error if no such order exists. Okay, so far, cool. So we're now talking about ordered diffs; you can get a list of ordered diffs, programmatically or through the command line. I want to skip some other examples and just mention why we choose to do this in memory. If a user has 1,000 tables and they change 10 of those tables, then using MySQL's information_schema I would need to scan 1,000 tables times two — because I would need to first apply the new 1,000-table schema and then compare, reading information_schema for all of those tables, only to find that really just 10 tables changed. And then how would I figure out the order? Because if I'm dependent on MySQL, I would need to actually apply those ALTER TABLEs and CREATE TABLEs and DROP TABLEs.
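The try-apply-revert search for a valid order can be sketched with a toy model. Here a "change" creates one entity and may require others to exist first (a view requiring its base table); we try permutations and validate each sequence against an in-memory state, the way the talk describes — with the caveat that schemadiff additionally prunes the search by grouping changes that cannot conflict:

```go
package main

import "fmt"

// Change is a toy model: it creates one named entity and requires others
// to already exist (e.g. CREATE VIEW requires its base table).
type Change struct {
	Creates  string
	Requires []string
}

// applies validates a sequence in memory: every change's requirements must
// exist by the time that change is applied.
func applies(seq []Change) bool {
	exists := map[string]bool{}
	for _, c := range seq {
		for _, r := range c.Requires {
			if !exists[r] {
				return false // this permutation fails; revert and try another
			}
		}
		exists[c.Creates] = true
	}
	return true
}

// validOrder tries permutations until one applies cleanly.
func validOrder(changes []Change) ([]Change, bool) {
	var perm func(seq, rest []Change) ([]Change, bool)
	perm = func(seq, rest []Change) ([]Change, bool) {
		if len(rest) == 0 {
			if applies(seq) {
				return seq, true
			}
			return nil, false
		}
		for i := range rest {
			next := append(append([]Change{}, rest[:i]...), rest[i+1:]...)
			if out, ok := perm(append(append([]Change{}, seq...), rest[i]), next); ok {
				return out, true
			}
		}
		return nil, false
	}
	return perm(nil, changes)
}

func main() {
	order, ok := validOrder([]Change{
		{Creates: "v", Requires: []string{"t"}}, // CREATE VIEW v ... FROM t
		{Creates: "t"},                          // CREATE TABLE t
	})
	fmt.Println(ok, order[0].Creates, order[1].Creates) // the table must come first
}
```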
And if I had to try those permutations on a live server, or even in Docker or whatever, it would just take time — we know it takes time, like multi-minute time — whereas with schemadiff we can handle these numbers in sub-second, and without a lot of memory: a few megabytes, a few tens of megabytes in bad conditions. So that's really useful to us. I only have like a few minutes? Five? Perfect. So, again skipping some features, I want to point out that even the ordered diff is not enough. It still does not guarantee success of the migration. We've ordered everything, the sequence looks correct, we're applying an ALTER TABLE, and it breaks. Why is that? There could be multiple conditions. I could be reducing the data range, for example: changing a column from INT to BIGINT UNSIGNED. It's bigger on the positive side, but if I have the value minus one, this migration will fail — it should fail, I hope it will fail. Or turning a nullable column into NOT NULL: that's fine going forward, but if you want to come back — say the change also removed a unique constraint — coming back means re-adding that unique constraint, and that's something you want to know ahead of time. Even more: schemadiff can predict — not fully, there is stuff that is stateful — ahead of time whether a migration, or a set of changes, is eligible for instant DDL.
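The data-range argument above is simple arithmetic over type bounds, and can be sketched directly. The ranges are MySQL's documented integer ranges (with the unsigned BIGINT upper bound clamped to int64 for this sketch); the function names are illustrative:

```go
package main

import "fmt"

// Range holds a column type's value bounds.
type Range struct{ Min, Max int64 }

// A subset of MySQL's integer type ranges. For this sketch the unsigned
// BIGINT maximum is clamped to the int64 maximum.
var typeRanges = map[string]Range{
	"int":             {-2147483648, 2147483647},
	"bigint":          {-9223372036854775808, 9223372036854775807},
	"bigint unsigned": {0, 9223372036854775807},
}

// wouldFail reports which existing values break a conversion to the target
// type. INT -> BIGINT UNSIGNED widens the positive range, yet any negative
// value dooms the migration.
func wouldFail(values []int64, target string) []int64 {
	r := typeRanges[target]
	var bad []int64
	for _, v := range values {
		if v < r.Min || v > r.Max {
			bad = append(bad, v)
		}
	}
	return bad
}

func main() {
	// The column holds 0, 7 and -1; only -1 cannot survive the conversion.
	fmt.Println(wouldFail([]int64{0, 7, -1}, "bigint unsigned")) // [-1]
}
```

Note that this check is exactly the "stateful" part the talk mentions: the schema alone says the ALTER is valid; only the data decides whether it succeeds.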
Today, you would just try it: you would run the ALTER TABLE. Or, if you want to run it only if it's instant, you would run ALTER TABLE ... ALGORITHM=INSTANT, and if the change isn't instant-capable, it will fail — but you'd have to try it to find out. How can you know in advance? schemadiff will tell you: this is eligible for instant DDL. [Audience question.] Explain again, sorry — correct. If you have minor versions — yes, correct. schemadiff is minor-version aware: it knows what was added in which version. Okay, I'm basically out of time. There's more to this, but I want to leave a couple of minutes for questions if I can. Thank you.